The network is trained using a variant of Contrastive Hebbian Learning (CHL) [Movellan, 1990], modified to accommodate unsupervised learning (auto-association) and phase angles. In Contrastive Hebbian Learning, weight updates take place in positive (Hebbian) and negative (anti-Hebbian) phases. During the positive phase, the input and output units are clamped to training patterns, and the network is then allowed to settle. Weight updates during this phase are proportional to the product of the activations of the units on either end of the connection. During the negative phase, only the input units are clamped, and the network is again allowed to settle. Weight updates during this phase are proportional to the negative of the product of the activations on either end of the connection. For OUs, which have phase angles, the coupling function applied to the phase angle difference also enters into the update. The net weight update on the connection joining units i and j following the presentation of a training pattern is
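A plausible form of the update rule, reconstructed from the description above (the learning rate $\eta$, activations $a$, phase angles $\theta$, and coupling function $f$ are assumed symbol names, not necessarily those of the original):

$$
\Delta w_{ij} = \eta\left[\,\bar{a}_i^{+}\,\bar{a}_j^{+}\,f(\bar{\theta}_i^{+} - \bar{\theta}_j^{+}) \;-\; \bar{a}_i^{-}\,\bar{a}_j^{-}\,f(\bar{\theta}_i^{-} - \bar{\theta}_j^{-})\right]
$$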
where a bar over a symbol refers to that quantity's value once the network has stabilized, and the + and - superscripts refer to the positive and negative phases of learning. When the network generates the appropriate output for each training input, the changes from the positive and negative phases cancel each other out, and the weights no longer change. See the Appendix for the derivation of the CHL rule for OUs.
CHL was originally described for hetero-associative learning. For auto-associative learning, there is no distinction between input and output units, so we must decide which units to clamp and which to leave free during the negative phase. One way to proceed is to clamp a randomly selected subset of the input/output units during the negative phase, and this is the approach we have followed with Playpen.
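The two-phase update for OUs can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not Playpen's actual code: the cosine coupling function, the learning rate, and the `chl_update` helper name are all choices made here for the sketch.

```python
import numpy as np

def chl_update(a_pos, theta_pos, a_neg, theta_neg, lr=0.1, coupling=np.cos):
    """Contrastive Hebbian weight update for oscillatory units.

    a_*     : settled activation vectors in the positive/negative phases
    theta_* : settled phase-angle vectors in the two phases
    coupling: function of the phase-angle difference (cosine is an assumption)
    """
    # Hebbian term from the positive (clamped) phase: activation products
    # weighted by the coupling of the phase-angle differences
    pos = np.outer(a_pos, a_pos) * coupling(theta_pos[:, None] - theta_pos[None, :])
    # Anti-Hebbian term from the negative (free) phase, same form
    neg = np.outer(a_neg, a_neg) * coupling(theta_neg[:, None] - theta_neg[None, :])
    return lr * (pos - neg)

rng = np.random.default_rng(0)
n = 6

# Auto-associative negative phase: clamp a random subset of the units
clamp_mask = rng.random(n) < 0.5

# Stand-ins for the settled states of the two phases
a_pos, th_pos = rng.random(n), rng.uniform(0, 2 * np.pi, n)
a_neg, th_neg = rng.random(n), rng.uniform(0, 2 * np.pi, n)

# Clamped units keep their training values in the negative phase
a_neg[clamp_mask], th_neg[clamp_mask] = a_pos[clamp_mask], th_pos[clamp_mask]

dW = chl_update(a_pos, th_pos, a_neg, th_neg)
```

Note that when the negative-phase state matches the positive-phase state, the two terms cancel and the update is zero, which is the fixed-point property described above.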
Demo 4 illustrates the two phases of learning in a simple network of OUs.