Neural Networks

Feed-Forward vs Back-Propagation

A feed-forward neural network is one where information flows only forwards through the network, never backwards.

Back-propagation is a training algorithm that runs the values forwards through the neural network, calculates the error at the output, and propagates that error back to the earlier layers. This refines the weights on the neurons' inputs.
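
To make the two passes concrete, here is a minimal sketch in Python with NumPy of one training step for a tiny one-hidden-layer network: the values are run forwards, the error is calculated at the output, and the error is propagated back to update the weights. The layer sizes, sigmoid activation, learning rate and data are assumptions made for the example, not part of the original text.

import numpy as np

# Tiny illustrative network: 2 inputs -> 3 hidden units -> 1 output.
# All sizes, data and the learning rate are assumptions for the example.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.5, -1.2]])    # one training example
y = np.array([[1.0]])          # its target value
lr = 0.1                       # learning rate

# Forward pass: information flows forwards only.
h = sigmoid(x @ W1)            # hidden-layer activations
y_hat = sigmoid(h @ W2)        # network output

# Error at the output (derivative of a squared-error loss).
error = y_hat - y

# Back-propagation: push the error back to refine the weights.
grad_out = error * y_hat * (1 - y_hat)       # gradient at the output layer
grad_hid = (grad_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

W2 -= lr * h.T @ grad_out      # update hidden -> output weights
W1 -= lr * x.T @ grad_hid      # update input -> hidden weights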

Recurrent Neural Networks (RNN)

An RNN contains a loop: the output of a layer is fed back into the network as an input at the next step. It is trained with a variant of back-propagation known as back-propagation through time.

In an RNN, the output of a previous step is used as an input for the next step. This means an RNN is often more compact than other neural networks, as it has a 'memory' of the previous output and can predict the next output in a sequence[1]. It is often used in sequence modelling; a practical example is machine translation (translating from one human language to another).
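
As an illustration, below is a minimal sketch of a vanilla RNN step in Python with NumPy: the hidden state produced at one step is fed back in as an input at the next step, which is the 'memory' described above. The sizes, tanh activation and random data are assumptions for the example.

import numpy as np

# Minimal vanilla RNN cell: the hidden state from the previous step is fed
# back in alongside the next input. All sizes and data are illustrative.
rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_xh = rng.normal(size=(input_size, hidden_size))   # input -> hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden weights (the loop)
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The new state depends on the current input AND the previous state:
    # this is the 'memory' of earlier steps in the sequence.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Run over a short sequence of 5 time steps.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)   # the output of one step feeds the next step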

Convolutional Neural Networks (CNN)

Convolution is a mathematical operation on two functions (f and g) that produces a third function expressing how the shape of one is modified by the other. The term convolution refers both to the resulting function and to the process of computing it[2]. Convolution is an integral that expresses the amount of overlap of one function as it is shifted over another function. It therefore "blends" one function with another.[3]

CNNs are applied effectively to image recognition tasks, where convolutional layers slide small filters (kernels) over an image to detect local features.
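
The following is a minimal sketch of the discrete 2D convolution described above, written in plain Python with NumPy for illustration. The toy image and the edge-detecting kernel are assumptions made for the example; deep-learning libraries typically implement the closely related cross-correlation (no kernel flip) but still call it convolution.

import numpy as np

def conv2d_valid(image, kernel):
    # Discrete 2D convolution ('valid' mode): the kernel is flipped and slid
    # over the image; each output value is the sum of the overlap, i.e. how
    # much the two functions 'blend' at that shift.
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# Toy example: a small filter responding to vertical edges in a 4x4 'image'.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[1.0, -1.0]])
print(conv2d_valid(image, edge_kernel))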


Deep Learning

Deep learning is a different approach from classic machine learning: it does not necessarily require labelled data, because the deep layers and repeated iterations can observe patterns and similarities within the data and form their own groupings. Classic machine learning typically requires labelled data to learn from[4].

A method is called "deep learning" when there is more than one hidden layer in the neural network.


Multi-Layer Perceptron

As the name implies, a multi-layer perceptron (MLP) has multiple hidden layers in its network, and is therefore a deep learning technique. It is a supervised machine learning technique. Each node, apart from the input nodes, has a nonlinear activation function[5], and the network is trained using back-propagation.

As explained above, the nodes in the first hidden layer are functions of the predictors in the input layer. Since an MLP allows a second (or further) hidden layer, the nodes of the second hidden layer are functions of the nodes of the first hidden layer, and so on.


Implementing a multi-layer perceptron
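
Below is a minimal sketch of a multi-layer perceptron, written in Python with NumPy for illustration: two hidden layers with nonlinear (sigmoid) activations, trained by back-propagation on the XOR problem. The layer sizes, learning rate and iteration count are arbitrary choices for the example, not a definitive implementation.

import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a toy supervised problem with labelled inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sizes = [2, 8, 8, 1]   # input layer, two hidden layers, output layer
W = [rng.normal(scale=0.5, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]
lr = 0.5

for step in range(5000):
    # Forward pass: each layer is a nonlinear function of the previous one.
    activations = [X]
    for Wl, bl in zip(W, b):
        activations.append(sigmoid(activations[-1] @ Wl + bl))

    # Back-propagation: push the output error back through every layer.
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for l in reversed(range(len(W))):
        grad_W = activations[l].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if l > 0:
            # Gradient for the previous layer, computed before W[l] is updated.
            delta = (delta @ W[l].T) * activations[l] * (1 - activations[l])
        W[l] -= lr * grad_W
        b[l] -= lr * grad_b

# Final forward pass: ideally the outputs approach [0, 1, 1, 0].
out = X
for Wl, bl in zip(W, b):
    out = sigmoid(out @ Wl + bl)
print(np.round(out, 2))

The sketch uses full-batch gradient descent for simplicity; mini-batches and other activation functions are common design choices in practice.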

References