Weights on the connections joining units are learned as the network is trained on a set of patterns. Learning proceeds by a variant of Hebbian learning known as Contrastive Hebbian Learning [Movellan, 1990] and takes place in two phases. During the positive phase, an input pattern and the corresponding output pattern are both clamped, the network is allowed to stabilize, and each weight is adjusted in proportion to the correlation between the activations of the units it connects. During the negative phase, only the input pattern is clamped, the network is again allowed to stabilize, and each weight is adjusted in proportion to the anti-correlation between the activations of the units it connects. Once the patterns have been learned, the two adjustments cancel each other out because the network's behavior in the two phases is identical: the network produces the desired output for a given input during the negative phase.
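The two-phase update can be sketched in code. The following is a minimal toy illustration, not the paper's actual architecture: it assumes a small network of sigmoid units with symmetric weights, split (by hypothetical choice) into input, hidden, and output groups, and settles unclamped units by repeated synchronous updates. The weight change is the positive-phase correlation minus the negative-phase correlation, so it vanishes when the two phases produce identical activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(W, act, clamped, n_steps=50):
    """Relax the unclamped units to a stable state under symmetric weights W."""
    act = act.copy()
    free = ~clamped
    for _ in range(n_steps):
        net = W @ act
        act[free] = sigmoid(net[free])
    return act

# Toy sizes (assumed for illustration): 2 input, 2 hidden, 2 output units.
n = 6
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetric connections
np.fill_diagonal(W, 0.0)   # no self-connections

inp = np.array([1.0, 0.0])       # example input pattern
target = np.array([0.0, 1.0])    # example desired output pattern
eta = 0.05                       # learning rate

for epoch in range(100):
    # Positive phase: clamp both the input and the desired output,
    # settle, then apply a Hebbian (correlational) update.
    act = np.zeros(n)
    act[:2] = inp
    act[4:] = target
    clamped = np.array([True, True, False, False, True, True])
    a_pos = settle(W, act, clamped)

    # Negative phase: clamp only the input, settle, then apply an
    # anti-Hebbian (anti-correlational) update.
    act = np.zeros(n)
    act[:2] = inp
    clamped = np.array([True, True, False, False, False, False])
    a_neg = settle(W, act, clamped)

    # Net weight change: positive-phase correlation minus negative-phase
    # correlation. When the two phases agree, the change is zero.
    dW = eta * (np.outer(a_pos, a_pos) - np.outer(a_neg, a_neg))
    W += (dW + dW.T) / 2
    np.fill_diagonal(W, 0.0)
```

After training, the negative-phase settling alone drives the output units toward the target, at which point the positive and negative updates cancel and the weights stop changing.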