Demo post
This is a demo post; its purpose is simply to see what can be done (in terms of a post) with this Jekyll theme.
1. Demo post
Inline math test: $a = 2$. Hello, how are you? Some more changes. Some more changes. \(a\alpha = 2\). Let us use this paragraph to track how many times the deployed website showed that "layout error" (8th April, 15th commit): 2 to 3 times. (Writing some more to see if length affects the layout.)
```python
'''A simple alternating gradient descent-ascent (AltGDA) optimizer.'''
import torch
from torch.optim import Optimizer


class AltGDA(Optimizer):
    def __init__(self, params, lr=1e-3):
        if lr < 0.0:
            raise ValueError('Invalid learning rate: {}'.format(lr))
        defaults = dict(lr=lr)
        super(AltGDA, self).__init__(params, defaults)

    def step(self, dLoss, gLoss, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        # Group 0 holds the first player's parameters, group 1 the second's.
        dGroup = self.param_groups[0]
        gGroup = self.param_groups[1]
        # First, a gradient-descent step on the first player's loss.
        dLoss.backward()
        for p in dGroup['params']:
            # boilerplate
            if p.grad is None:
                continue
            grad = p.grad.data
            if grad.is_sparse:
                raise RuntimeError('AltGDA does not support sparse gradients')
            # Algorithm: plain gradient descent
            p.data = p.data - dGroup['lr'] * grad
        # Then a gradient-descent step on the second player's loss,
        # using the freshly updated first-player parameters.
        gLoss.backward()
        for p in gGroup['params']:
            # boilerplate
            if p.grad is None:
                continue
            grad = p.grad.data
            if grad.is_sparse:
                raise RuntimeError('AltGDA does not support sparse gradients')
            # Algorithm: plain gradient descent
            p.data = p.data - gGroup['lr'] * grad
        return loss
```
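The alternating update rule above can be worked out by hand on a toy bilinear game, f(x, y) = x·y, where the first player minimises f and the second minimises −f. This is only an illustrative sketch (the toy game and variable names are assumptions, not part of the post); the key point is that the second player's gradient is evaluated at the *already updated* x, which is what makes the scheme alternating rather than simultaneous:

```python
# Toy bilinear game (illustrative assumption, not from the post above):
# player x minimises f(x, y) = x * y, player y minimises -f(x, y).
# AltGDA alternates one gradient-descent step per player, and y's
# gradient uses the freshly updated x.
x, y, lr = 1.0, 1.0, 0.1

for _ in range(3):
    x = x - lr * y        # df/dx = y
    y = y - lr * (-x)     # d(-f)/dy = -x, evaluated at the new x

print(round(x, 3), round(y, 3))  # 0.674 1.237
```

With simultaneous updates both players would use the stale x, which is known to behave worse on bilinear games; alternation is the design choice the `step` method encodes by updating group 0 before back-propagating the second loss.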
2. Random 1
INSPIRATION DAY
This text is written in latex font. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence.
Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie. It’s unclear how a traditional neural network could use its reasoning about previous events in the film to inform later ones.
Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.
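The loop idea can be sketched in a few lines. Below is a minimal, hand-rolled illustration (the function `step` and the fixed weights are made up for this example, not from any library): the hidden state is fed back into the cell at every time step, so information from early inputs persists even after the input goes silent.

```python
import math

def step(x, hidden, w_x=0.5, w_h=0.8):
    # One RNN cell: the new hidden state mixes the current input
    # with the previous hidden state, so information persists.
    return math.tanh(w_x * x + w_h * hidden)

hidden = 0.0
for x in [1.0, 0.0, 0.0]:   # after the first input, the sequence is silent...
    hidden = step(x, hidden)

print(round(hidden, 3))      # ...yet the state still carries a trace of it
```

A real recurrent layer learns `w_x` and `w_h` from data, but the looping structure is exactly this.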
3. This is text in CMS
This text is written in CMS font without using html tagging. Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence. Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie. It’s unclear how a traditional neural network could use its reasoning about previous events in the film to inform later ones. Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.
4. Random 2
Hey this is a prompt
5. Collapsing sections
Click here to see the implementation details
To Do list
### Hello **bold**

Some code
- Hello
- How are you
6. Chess Game
White | Black |
---|---|
1. e4 | 1... e5 |
2. Nf3 | 2... Nc6 |
3. Bc4 | 3... Bc5 |
7. H2
7.1 H3
This is some random text in h3
7.1.1 H4
7.2 H3
8. Ending
This is just a demo post to test out the features of Jekyll.