Back-Propagation Network

The back-propagation neural network is a powerful tool for pattern search, forecasting, and qualitative analysis. It is called a "back-propagation network" after its learning algorithm, in which the error is propagated from the output layer towards the input layer, that is, in the direction opposite to the propagation of signals during normal operation of the network.

The back-propagation neural network consists of several layers of neurons, with each neuron of layer i connected to each neuron of layer i+1.
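For illustration only (the matrix representation and every name below are our assumptions, not part of the method's definition), such a fully connected layered structure can be stored as one weight matrix per pair of adjacent layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example layer sizes: 3 inputs -> 4 hidden neurons -> 2 outputs.
layer_sizes = [3, 4, 2]

# weights[n][i, j] is the connection from neuron i of layer n
# to neuron j of layer n + 1 (full connectivity between layers).
weights = [rng.normal(0.0, 0.1, size=(m, k))
           for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]
```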

In general, training a neural network reduces to finding the functional dependency Y = F(X), where X is an input vector and Y is an output vector. With a limited set of input data, such a problem has an infinite number of solutions. To limit the search space during training, the least-squares method is used, that is, the network is trained by minimizing the error criterion:

E(w) = \frac{1}{2} \sum_{j=1}^{p} \left( y_j - d_j \right)^2,

where:

y_j is the actual value of the j-th output of the network;
d_j is the target value of the j-th output;
p is the number of neurons in the output layer.
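A minimal sketch of this criterion in Python (function and variable names are illustrative):

```python
import numpy as np

def error_criterion(y, d):
    """Least-squares error E = 1/2 * sum_j (y_j - d_j)^2."""
    y = np.asarray(y, dtype=float)
    d = np.asarray(d, dtype=float)
    return 0.5 * np.sum((y - d) ** 2)

# For outputs [0.8, 0.2] and targets [1.0, 0.0]:
print(error_criterion([0.8, 0.2], [1.0, 0.0]))  # 0.5 * (0.04 + 0.04) = 0.04
```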

The neural network is trained by the gradient descent method, that is, at each iteration each weight is changed according to the formula:

\Delta w_{ij} = -h \cdot \frac{\partial E}{\partial w_{ij}}, \quad (1)

where h is a parameter that determines the learning rate.
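One such iteration, assuming the gradient dE/dw is already known (a sketch; the gradient itself is derived below):

```python
def gradient_step(w, grad, h=0.1):
    """One gradient-descent iteration: w <- w - h * dE/dw."""
    return w - h * grad
```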

\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial y_j} \cdot \frac{dy_j}{ds_j} \cdot \frac{\partial s_j}{\partial w_{ij}}, \quad (2)

where:

y_j is the output value of the j-th neuron;
s_j is the weighted sum of the input signals of the j-th neuron (the argument of the activation function).

The second multiplier of formula (2) is the derivative of the activation function with respect to its argument; the activation function must therefore be differentiable. For example, for the logistic (sigmoid) activation function it is expressed through the output itself:

\frac{dy_j}{ds_j} = y_j \cdot \left( 1 - y_j \right),

where y_j = f(s_j) = \frac{1}{1 + e^{-s_j}}.

And the third multiplier, \partial s_j / \partial w_{ij}, equals the value of the input that the weight w_{ij} multiplies:

\frac{\partial s_j}{\partial w_{ij}} = x_i,

where x_i is the value of the i-th input of the neuron.
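The last two multipliers are easy to state in code. The sketch below assumes the sigmoid activation mentioned above; all names are ours:

```python
import numpy as np

def sigmoid(s):
    """Logistic activation: y = 1 / (1 + exp(-s))."""
    return 1.0 / (1.0 + np.exp(-s))

def sigmoid_derivative(y):
    """Second multiplier dy_j/ds_j, expressed through the output: y*(1-y)."""
    return y * (1.0 - y)

# Third multiplier: ds_j/dw_ij = x_i, the i-th input of the neuron,
# so dE/dw_ij = (dE/dy_j) * y_j*(1 - y_j) * x_i.
```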

Consider the definition of the first multiplier of formula (2):

\frac{\partial E}{\partial y_j} = \sum_{k} \frac{\partial E}{\partial y_k} \cdot \frac{dy_k}{ds_k} \cdot \frac{\partial s_k}{\partial y_j} = \sum_{k} \frac{\partial E}{\partial y_k} \cdot \frac{dy_k}{ds_k} \cdot w_{jk}^{(n+1)},

where the summation over k runs through the neurons of layer n+1.

Introduce an auxiliary variable:

\delta_j^{(n)} = \frac{\partial E}{\partial y_j} \cdot \frac{dy_j}{ds_j}.

Then a recursive formula can be defined for determining \delta_j^{(n)} of the n-th layer, if \delta_k^{(n+1)} of the (n+1)-th layer is known:

\delta_j^{(n)} = \left[ \sum_{k} \delta_k^{(n+1)} \cdot w_{jk}^{(n+1)} \right] \cdot \frac{dy_j}{ds_j} \quad (3)
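Formula (3) maps onto one vectorized operation per layer. A sketch assuming sigmoid units (so dy/ds = y(1-y)) and a matrix W_next whose element [j, k] is w_jk of layer n+1:

```python
import numpy as np

def delta_hidden(delta_next, W_next, y):
    """delta_j^(n) = (sum_k delta_k^(n+1) * w_jk^(n+1)) * y_j * (1 - y_j)."""
    return (W_next @ delta_next) * y * (1.0 - y)
```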

Calculating \delta_j for the last layer of the network presents no difficulty, since the target vector is known. The target vector is the vector of values that the network should produce for the given set of input values:

\delta_j^{(N)} = \left( y_j^{(N)} - d_j \right) \cdot \frac{dy_j}{ds_j} \quad (4)
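The output-layer counterpart of the sketch above, under the same sigmoid assumption (y and d are numpy arrays):

```python
def delta_output(y, d):
    """Formula (4): delta_j^(N) = (y_j^(N) - d_j) * y_j * (1 - y_j)."""
    return (y - d) * y * (1.0 - y)
```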

Finally, write formula (1) in expanded form:

\Delta w_{ij}^{(n)} = -h \cdot \delta_j^{(n)} \cdot x_i^{(n)} \quad (5)
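All weight changes of one layer can then be produced at once with an outer product (a numpy sketch; names are illustrative):

```python
import numpy as np

def weight_changes(x, delta, h=0.1):
    """Formula (5): dw_ij^(n) = -h * delta_j^(n) * x_i^(n) for all i, j."""
    return -h * np.outer(x, delta)
```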

Consider the complete training algorithm of the neural network (a runnable sketch of the whole procedure follows the list):

  1. Feed one of the training patterns to the network input and calculate the output values of all neurons (forward pass).

  2. Calculate \delta^{(N)} for the output layer of the neural network by formula (4) and calculate the changes of the weights \Delta w^{(N)} of output layer N by formula (5).

  3. Calculate \delta^{(n)} and \Delta w^{(n)} by formulas (3) and (5) for the remaining layers of the network, n = N-1, ..., 1.

  4. Adjust all the weights of the neural network:

     w_{ij}^{(n)}(t) = w_{ij}^{(n)}(t-1) + \Delta w_{ij}^{(n)}(t),

     where t is the number of the current iteration.

  5. If the error is significant, go to step 1.
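The whole procedure assembled into one runnable sketch, as promised above. Sigmoid activations, the absence of bias terms, and all names, sizes, and defaults are our assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, W):
    """Step 1: forward pass returning the outputs of every layer."""
    ys = [x]
    for Wn in W:
        ys.append(sigmoid(ys[-1] @ Wn))
    return ys

def train(X, D, layer_sizes, h=0.5, max_epochs=5000, tol=1e-3):
    W = [rng.normal(0.0, 0.5, size=(m, k))
         for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]
    for _ in range(max_epochs):
        total_error = 0.0
        for x, d in zip(X, D):
            ys = forward(x, W)                                 # step 1
            total_error += 0.5 * np.sum((ys[-1] - d) ** 2)
            delta = (ys[-1] - d) * ys[-1] * (1.0 - ys[-1])     # step 2, (4)
            for n in range(len(W) - 1, -1, -1):                # step 3
                dW = -h * np.outer(ys[n], delta)               # formula (5)
                if n > 0:                                      # formula (3)
                    delta = (W[n] @ delta) * ys[n] * (1.0 - ys[n])
                W[n] += dW                                     # step 4
        if total_error < tol:                                  # step 5
            break
    return W

# Usage: targets produced by a random "teacher" network of the same shape,
# so an exact fit exists by construction even without bias terms.
sizes = [3, 5, 2]
teacher = [rng.normal(size=(m, k)) for m, k in zip(sizes[:-1], sizes[1:])]
X = rng.uniform(-1.0, 1.0, size=(20, 3))
D = np.array([forward(x, teacher)[-1] for x in X])
W = train(X, D, sizes)
err = sum(0.5 * np.sum((forward(x, W)[-1] - d) ** 2) for x, d in zip(X, D))
print("error after training:", err)
```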

See also:

Library of Methods and Models | Fill from Example | ISmBackPropagation