# Optimizers

`class tinynet.optims.SGDOptimizer(lr, momentum=None)`

This class implements stochastic gradient descent (SGD), the algorithm used to update the parameters of a neural network. The update rule is simple:

$$w^{new} = w^{old} - \lambda \nabla$$

where $\lambda$ is the preset learning rate, and $\nabla$ is the gradient of the loss with respect to $w$.
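As a rough illustration, here is a minimal sketch of how such an optimizer might be written. It assumes parameters and gradients are passed as dicts of NumPy arrays, and that `momentum`, when given, follows the classic update $v \leftarrow \mu v + \nabla$, $w \leftarrow w - \lambda v$. The `step` method and dict layout are illustrative assumptions, not tinynet's actual API:

```python
import numpy as np

class SGDOptimizer:
    """Sketch of SGD with optional momentum (illustrative, not tinynet's code)."""

    def __init__(self, lr, momentum=None):
        self.lr = lr
        self.momentum = momentum
        self._velocity = {}  # per-parameter velocity buffers for momentum

    def step(self, params, grads):
        # params, grads: dicts mapping parameter names to numpy arrays (assumed layout)
        for name, grad in grads.items():
            if self.momentum is not None:
                # classic momentum: v <- mu * v + grad, then w <- w - lr * v
                v = self._velocity.get(name, np.zeros_like(grad))
                v = self.momentum * v + grad
                self._velocity[name] = v
                params[name] -= self.lr * v
            else:
                # vanilla SGD: w <- w - lr * grad
                params[name] -= self.lr * grad
```

A typical training loop would call `step(params, grads)` once per batch, after the gradients have been computed by backpropagation.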