Backpropagation

Backpropagation, or propagation of error, is a common method of teaching artificial neural networks how to perform a given task. It was first described by Paul Werbos in 1974, but it gained recognition only in 1986, through the work of David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams, and it led to a “renaissance” in the field of artificial neural network research.

It is a supervised learning method, and is an implementation of the delta rule; the term is an abbreviation for "backward propagation of errors". It requires a teacher that knows, or can calculate, the desired output for any given input. It is most useful for feed-forward networks (networks that have no feedback, or simply, no connections that loop). Backpropagation also requires that the activation function used by the artificial neurons (or "nodes") be differentiable.
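The logistic (sigmoid) function is a common choice that meets this requirement, in part because its derivative can be computed directly from its own output, which keeps the backward pass cheap. A minimal sketch in Python (numpy and the function names here are illustrative, not from the original text):

 import numpy as np

 def sigmoid(x):
     # Logistic activation: maps any real input into (0, 1).
     return 1.0 / (1.0 + np.exp(-x))

 def sigmoid_prime(y):
     # Derivative written in terms of the sigmoid's own output y = sigmoid(x):
     # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)).
     return y * (1.0 - y)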

Summary

Summary of the technique (a code sketch of these steps follows the list):
# Present a training sample to the neural network.
# Compare the network's output to the desired output for that sample, and calculate the error in each output neuron.
# For each output neuron, compute a "scaling factor" indicating how much, and in which direction, the output must be adjusted to match the desired output. This is the local error.
# Adjust the weights into each neuron to lower its local error.
# Assign "blame" for the local error to neurons at the previous layer, giving greater responsibility to neurons connected by stronger weights.
# Repeat from step 3 on the neurons at the previous layer, using each one's "blame" as its error.
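As an illustration of steps 2 through 5, here is a minimal sketch of the output-layer error and "blame" computation, assuming sigmoid activations and numpy (all variable names are hypothetical):

 import numpy as np

 # Hypothetical shapes: hidden activations h (n_hidden,), outputs o (n_out,),
 # weight matrix W_ho (n_out, n_hidden) from hidden layer to output layer.
 def output_step(h, o, target, W_ho, eta=0.5):
     error = target - o                 # step 2: error in each output neuron
     delta = error * o * (1.0 - o)      # step 3: scaling factor (local error)
     blame = W_ho.T @ delta             # step 5: blame for the previous layer,
                                        # larger for stronger connections
     W_ho += eta * np.outer(delta, h)   # step 4: adjust weights to lower the error
     return blame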

Algorithm

Pseudocode of the algorithm for a 3-layer network (only one hidden layer):

 Initialize the weights in the network (often randomly)
 Do
   For each example e in the training set
     O = neural-net-output(network, e)  ; forward pass
     T = teacher output for e
     Calculate error (T - O) at the output units
     Compute delta_wi for all weights from hidden layer to output layer  ; backward pass
     Compute delta_wi for all weights from input layer to hidden layer   ; backward pass continued
     Update the weights in the network
 Until all examples classified correctly or stopping criterion satisfied
 Return the network
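A runnable sketch of the same loop in Python, assuming sigmoid activations in both layers and a fixed learning rate; bias terms are omitted for brevity, and every name below is illustrative rather than part of the original pseudocode:

 import numpy as np

 def train_backprop(X, T, n_hidden=5, eta=0.5, epochs=5000, seed=0):
     """Train a 3-layer (one hidden layer) sigmoid network by backpropagation."""
     rng = np.random.default_rng(seed)
     n_in, n_out = X.shape[1], T.shape[1]
     # Initialize the weights in the network (often randomly)
     W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
     W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
     sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
     for _ in range(epochs):
         for x, t in zip(X, T):                          # for each example e
             h = sigmoid(W1 @ x)                         # forward pass
             o = sigmoid(W2 @ h)                         # O = neural-net-output
             err = t - o                                 # error (T - O) at outputs
             delta_o = err * o * (1.0 - o)               # backward pass
             delta_h = (W2.T @ delta_o) * h * (1.0 - h)  # backward pass continued
             W2 += eta * np.outer(delta_o, h)            # update the weights
             W1 += eta * np.outer(delta_h, x)
     return W1, W2

 # Hypothetical usage: X holds one training input per row, T the matching targets.
 # W1, W2 = train_backprop(X, T)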

As the algorithm's name implies, the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes. So technically speaking, backpropagation is used to calculate the gradient of the error of the network with respect to the network's modifiable weights. This gradient is almost always then used in a simple stochastic gradient descent algorithm to find weights that minimize the error. Often the term "backpropagation" is used in a more general sense, to refer to the entire procedure encompassing both the calculation of the gradient and its use in stochastic gradient descent. Backpropagation usually allows quick convergence on satisfactory local minima for error in the kind of networks to which it is suited.
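Concretely, the descent step moves each weight a small distance against the gradient of the error E (η below denotes the learning rate; this is the standard update formula implied by the text rather than stated in the original):

 \Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}}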

It is important to note that backpropagation networks are necessarily multilayer perceptrons (usually with one input, one hidden, and one output layer). In order for the hidden layer to serve any useful function, multilayer networks must have non-linear activation functions for the multiple layers: a multilayer network using only linear activation functions is equivalent to some single-layer, linear network. Non-linear activation functions that are commonly used include the logistic function, the softmax function, and the Gaussian function.
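The equivalence claimed above is simple matrix algebra: composing two linear layers yields another linear map, so the hidden layer adds no representational power. A small numpy check (illustrative values, not from the original article):

 import numpy as np

 rng = np.random.default_rng(1)
 W1 = rng.normal(size=(4, 3))   # input -> "hidden", purely linear (no activation)
 W2 = rng.normal(size=(2, 4))   # "hidden" -> output, purely linear
 x = rng.normal(size=3)

 # Two stacked linear layers equal one linear layer with weights W2 @ W1.
 assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)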

The backpropagation algorithm for calculating a gradient has been rediscovered a number of times, and is a special case of a more general technique called automatic differentiation in the reverse accumulation mode.

It is also closely related to the Gauss–Newton algorithm, and is part of continuing research in neural backpropagation.

External links

* Chapter 7 [http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf The backpropagation algorithm] of [http://page.mi.fu-berlin.de/rojas/neural/index.html.html "Neural Networks - A Systematic Introduction"] by Raúl Rojas (ISBN 978-3540605058)
* [http://neurondotnet.freehostia.com NeuronDotNet - A modular implementation of artificial neural networks in C# along with sample applications]
* [http://www.codeproject.com/KB/recipes/BP.aspx Implementation of BackPropagation in C++]
* [http://www.codeproject.com/KB/cs/BackPropagationNeuralNet.aspx Implementation of BackPropagation in C#]
* [http://ai4r.rubyforge.org/neuralNetworks.html Implementation of BackPropagation in Ruby]
* [http://www.tek271.com/articles/neuralNet/IntoToNeuralNets.html Quick explanation of the backpropagation algorithm]
* [http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html Graphical explanation of the backpropagation algorithm]
* [http://www.speech.sri.com/people/anand/771/html/node37.html Concise explanation of the backpropagation algorithm using math notation]
* [http://fbim.fh-regensburg.de/~saj39122/jfroehl/diplom/e-13-text.html Detailed numeric demonstration of how backpropagation algorithms work]
* [http://en.wikiversity.org/wiki/Learning_and_Neural_Networks Backpropagation neural network tutorial at the Wikiversity]

