# Layers¶

class tinynet.layers.Conv2D(name, input_dim, n_filter, h_filter, w_filter, stride, padding)

Conv2D performs a convolution operation on the given input.

backward(in_gradient)

> This function needs to be overridden.

forward(input)

> This function needs to be overridden. Compute the forward pass.
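As a rough sketch of what a convolution forward pass computes, here is a naive NumPy loop (not tinynet's actual implementation; the function name and the `(N, C, H, W)` data layout are assumptions):

```python
import numpy as np

def conv2d_forward(x, w, b, stride=1, padding=0):
    """Naive convolution forward pass (illustrative sketch).
    x: (N, C, H, W) input, w: (F, C, HH, WW) filters, b: (F,) bias."""
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    # zero-pad only the spatial dimensions
    x_pad = np.pad(x, ((0, 0), (0, 0), (padding, padding), (padding, padding)))
    H_out = (H + 2 * padding - HH) // stride + 1
    W_out = (W + 2 * padding - WW) // stride + 1
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    # element-wise product of the window with filter f, summed
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```

Real implementations usually vectorize this with an im2col transform, but the nested loop makes the sliding-window arithmetic explicit.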

class tinynet.layers.Deconv2D(name, input_dim, n_filters, h_filter, w_filter, stride, dilation=1, padding=0)

Deconv2D performs a deconvolution operation, also known as transposed convolution.

backward(in_gradient)

This function is not needed for computation, at least for now.

forward(input)

> This function needs to be overridden. Compute the forward pass.
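A transposed convolution can be pictured as each input pixel scattering a scaled copy of the kernel into the output. The sketch below assumes `dilation=1`, a `(C, F, HH, WW)` weight layout, and hypothetical function names; it is not tinynet's actual implementation:

```python
import numpy as np

def deconv2d_forward(x, w, stride=1, padding=0):
    """Naive transposed-convolution forward pass (illustrative sketch).
    x: (N, C, H, W) input, w: (C, F, HH, WW) filters."""
    N, C, H, W = x.shape
    _, F, HH, WW = w.shape
    # output size is the inverse of the convolution size formula
    H_out = (H - 1) * stride + HH - 2 * padding
    W_out = (W - 1) * stride + WW - 2 * padding
    out = np.zeros((N, F, H_out + 2 * padding, W_out + 2 * padding))
    for n in range(N):
        for i in range(H):
            for j in range(W):
                for f in range(F):
                    # each input pixel scatters a scaled copy of the kernel
                    out[n, f, i*stride:i*stride+HH, j*stride:j*stride+WW] += \
                        np.sum(x[n, :, i, j][:, None, None] * w[:, f], axis=0)
    if padding > 0:
        out = out[:, :, padding:-padding, padding:-padding]
    return out
```

With `stride=2` and a 2x2 kernel, the scattered copies tile the output without overlap, which is why transposed convolution is a common upsampling layer.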

class tinynet.layers.Dropout(name, probability)

The Dropout layer randomly drops a fraction of its input nodes during training.

backward(in_gradient)

> This function needs to be overridden.

forward(input)

> This function needs to be overridden. Compute the forward pass.
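A minimal sketch of the idea, using the common "inverted dropout" formulation (whether tinynet scales at train time or at test time is an assumption; function names are hypothetical):

```python
import numpy as np

def dropout_forward(x, probability, training=True, rng=None):
    """Inverted dropout: zero each unit with `probability`, and scale the
    survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training:
        return x, None
    if rng is None:
        rng = np.random.default_rng()
    mask = (rng.random(x.shape) >= probability) / (1.0 - probability)
    return x * mask, mask

def dropout_backward(in_gradient, mask):
    # gradient flows only through the units that survived the forward pass
    return in_gradient * mask
```

The mask from the forward pass must be cached, since the backward pass has to drop exactly the same units.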

class tinynet.layers.Flatten(name)

The Flatten layer reads an ndarray as input and reshapes it into a 1-D vector.

backward(in_gradient)

> This function needs to be overridden.

forward(input)

> This function needs to be overridden. Compute the forward pass.
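The operation is a reshape in both directions. The sketch below assumes the batch dimension is preserved and each sample is flattened (function names and this convention are assumptions, not tinynet's API):

```python
import numpy as np

def flatten_forward(x):
    """Reshape (N, ...) into (N, D), remembering the shape for backward."""
    return x.reshape(x.shape[0], -1), x.shape

def flatten_backward(in_gradient, original_shape):
    # the backward pass is just the inverse reshape
    return in_gradient.reshape(original_shape)
```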

class tinynet.layers.Linear(name, input_dim, output_dim)

The Linear layer performs a fully connected (affine) operation.

backward(in_gradient)

In the backward pass, we compute the gradient with respect to $$w$$, $$b$$, and $$x$$.

We have:

$\frac{\partial l}{\partial w} = \frac{\partial l}{\partial y}\frac{\partial y}{\partial w}=\frac{\partial l}{\partial y} x$

$\frac{\partial l}{\partial b} = \frac{\partial l}{\partial y}\frac{\partial y}{\partial b}=\frac{\partial l}{\partial y}$

$\frac{\partial l}{\partial x} = \frac{\partial l}{\partial y}\frac{\partial y}{\partial x}=w^\top \frac{\partial l}{\partial y}$

forward(input)

The forward pass of the fully connected layer is given by $$f(x)=wx+b$$.
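The forward and backward passes can be sketched in NumPy as follows. The docs write $$f(x)=wx+b$$ in column-vector form; the sketch uses the equivalent row-batch convention `x @ w + b`, and the function names are hypothetical:

```python
import numpy as np

def linear_forward(x, w, b):
    """Affine forward pass.
    x: (N, D) batch of rows, w: (D, M) weights, b: (M,) bias."""
    return x @ w + b

def linear_backward(in_gradient, x, w):
    """Gradients matching dl/dw = dl/dy * x, dl/db = dl/dy,
    dl/dx = w^T dl/dy (transposed here for the row-batch convention)."""
    dw = x.T @ in_gradient        # (D, M)
    db = in_gradient.sum(axis=0)  # (M,), summed over the batch
    dx = in_gradient @ w.T        # (N, D)
    return dw, db, dx
```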

class tinynet.layers.MaxPool2D(name, input_dim, size, stride, return_index=False)

Performs max pooling, i.e. selects the maximum item in each sliding window.

backward(in_gradient)

> This function needs to be overridden.

forward(input)

> This function needs to be overridden. Compute the forward pass.
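A sketch of the forward pass (omitting the `return_index` bookkeeping that the backward pass would need; the function name and `(N, C, H, W)` layout are assumptions):

```python
import numpy as np

def maxpool2d_forward(x, size, stride):
    """Select the max in each (size x size) sliding window.
    x: (N, C, H, W) -> out: (N, C, H_out, W_out)."""
    N, C, H, W = x.shape
    H_out = (H - size) // stride + 1
    W_out = (W - size) // stride + 1
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            # max over the spatial extent of each window
            window = x[:, :, i*stride:i*stride+size, j*stride:j*stride+size]
            out[:, :, i, j] = window.max(axis=(2, 3))
    return out
```

In the backward pass, the gradient is routed only to the argmax position of each window, which is why the layer offers a `return_index` option.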

class tinynet.layers.ReLu(name)

The ReLu layer performs the rectified linear unit operation.

backward(in_gradient)

> This function needs to be overridden.

forward(input)

In the forward pass, the output is defined as $$y=\max(0, x)$$.
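Both passes are one-liners; a sketch (hypothetical function names, not tinynet's API):

```python
import numpy as np

def relu_forward(x):
    """y = max(0, x), element-wise."""
    return np.maximum(0, x)

def relu_backward(in_gradient, x):
    # the gradient passes through only where the input was positive
    return in_gradient * (x > 0)
```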

class tinynet.layers.Softmax(name='softmax', axis=1, eps=1e-10)

The Softmax layer returns probabilities proportional to the exponentials of the input values.

backward(in_gradient)

Important: the actual backward gradient is not $$1$$.

The gradient is passed directly to the previous layer because, when softmax is used together with the cross-entropy loss, the combined gradient has a simple closed form (see the theoretical derivation). We therefore compute the gradient inside the cross-entropy loss function, which reduces complexity and improves numerical stability.
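The closed form in question: for logits $$z$$, probabilities $$p=\mathrm{softmax}(z)$$, and a one-hot target $$t$$, the gradient of the cross-entropy loss with respect to $$z$$ collapses to $$p-t$$. A small numerical check of that identity (standalone sketch, not tinynet code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(z, t):
    """Cross-entropy of softmax(z) against one-hot targets t."""
    return -np.sum(t * np.log(softmax(z)))

z = np.array([[2.0, 1.0, 0.1]])
t = np.array([[1.0, 0.0, 0.0]])  # one-hot target
# combined gradient of CE(softmax(z)) w.r.t. the logits z
combined_grad = softmax(z) - t
```

Because the loss function already produces this gradient with respect to the logits, the softmax layer's `backward` can forward `in_gradient` unchanged.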

forward(input)

Some numerical-stability tricks are applied here.

> TODO: to add the tricks
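The standard trick (presumably what the TODO refers to, though that is an assumption about tinynet's implementation) is to subtract the per-row maximum before exponentiating; this leaves the result mathematically unchanged but keeps `exp` from overflowing. A sketch, also assuming `eps` guards the denominator:

```python
import numpy as np

def softmax_forward(x, axis=1, eps=1e-10):
    """Numerically stable softmax sketch."""
    # subtracting the max is a no-op mathematically (it cancels in the
    # ratio) but prevents exp() from overflowing on large inputs
    shifted = x - x.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / (exp.sum(axis=axis, keepdims=True) + eps)
```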