Okay Drich, one last go. If you won't go to the source, I'll bring it to you:
The bolded is what I meant by bias in the context of bidirectional neural networks. It is a direct effect of the bidirectional connectivity, and if you read this post you may come to understand why. You say my understanding of this is biased, but I ask you, how can it be? It is simply a description of a process.
Quote:There are two ways to think about association in the brain. The simple way: if neuron A synapses with neuron B and both A and B are firing at the same time, association is the strengthening of the synaptic strength between them as a function of the activity in both neurons, so that B becomes more likely to fire when A is firing. This strengthening is a function of how active both the post-synaptic neuron (B) and the pre-synaptic neuron (A) are, and roughly translates to saying that the more active B is, the more the synaptic strength will increase, relative to how active A is. This is roughly the way it is modelled in Emergent, and that is based on the underlying biology.

The second way of thinking about association - the way I mostly think about it, and where I see the beauty - builds on the first. It is the effect of neurons X and Y both synapsing with neuron Z, combined with bidirectional connectivity - which is to say, neuron Z also synapses with X and Y going the other way. The same principles apply for the changing of the weights as for A and B above, but in this case Z can be said to 'bind' X and Y... it becomes a detector for X and Y and is more likely to fire if both of them are active (according to the weight distribution between them). So in this usage, X becomes associated with Y through Z. And neural network dynamics will make it so that the feedback synaptic strengths from Z to X and Y tend to closely mirror the input synaptic strengths from X and Y to Z... so if I notate the feedforward synapse as X>Z, then the feedback Z>X comes to be roughly the same.

The effect of this is to create bias and allow for pattern completion: if X fires and Y does not, but X still manages to activate Z even to a small degree, then Z will start sending activation back to X and Y in proportion to the weights.
This will increase the activity of the already-firing X, provide bias in Y to become active (so that Y requires less input current from elsewhere to fire), and increase the firing of Z as a result of the increased activity of X, which will then feed back more, and so on. In short, it's a feedback loop that allows a whole context of related connections to be 'bootstrapped' into action very quickly from very little initial input. To stop the feedback running away with itself, there are inhibitory interneurons, which output inhibitory current to offset the excitatory current coming into neurons. So in this case Z would synapse with an inhibitory interneuron, which in turn synapses back onto Z and fires in proportion to Z's activation, stopping Z from getting over-excited and allowing the network to settle into a stable state. So once you have a learned set of associations - spanning many related 'binding' neurons... what I call a context - this process allows the whole context to be activated in leaps and bounds from very small amounts of well-placed input from outside. Each little input causes feedback activation that cascades through the whole context, activating some neurons and biasing others as it goes, with the newly activated ones now contributing in the same way, speeding it up even more.
So if I want to recall some long-distant memory, I think about it in these terms, knowing that if I can find the right input, it could bootstrap the whole thing in vivid detail. But if I keep coming up against a mental block, it means the input I'm providing is peripheral to the context and is not well-placed enough to start a useful cascade. So for instance, even if I manage to activate a whole tree of associations, if it is associated with the tree I actually want to activate only through a binding neuron with a small weight, then however active that first tree is, it will not be able to trigger the binding neuron that would provide the bootstrapping feedback to the second tree. This is what I would call a red herring, and it is the equivalent of the context changes in Alice in Wonderland... a tear becomes a sea, and the whole context changes with nothing else in common between the first scene and the second. So in that situation, remembering anything from the first scene will not activate anything from the second scene, because the only point of entry is through that tear (or perhaps Alice herself... she would provide a little bit of activation, and thus bias, in the second scene) or something in the second scene itself.
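To make the first kind of association concrete, here is a toy sketch in Python of the weight-change rule described above. I'm using a plain Hebbian product of pre- and post-synaptic activity - simpler than the post-weighted rule I described, and nothing like Emergent's actual equations; the function name, learning rate, and activity values are all made up for illustration:

```python
# Toy rate-based Hebbian rule: the weight change is proportional to the
# product of pre- and post-synaptic activity (illustrative only).
def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

# A and B firing together repeatedly: the A>B synapse strengthens each time.
w_ab = 0.2
for _ in range(5):
    w_ab = hebbian_update(w_ab, pre=1.0, post=1.0)

# With bidirectional connectivity, the feedback synapse Z>X is trained by
# the same rule on the same pre/post product, so it comes to mirror X>Z.
x, z = 1.0, 0.8
w_xz = hebbian_update(0.2, pre=x, post=z)   # X>Z, feedforward
w_zx = hebbian_update(0.2, pre=z, post=x)   # Z>X: same product, same change
```

Because the rule depends only on the product of the two activities, the forward and backward weights receive the same update - which is the mechanical reason the feedback weights come to mirror the feedforward ones.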
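The bootstrapping-and-inhibition loop can be watched in a toy simulation too - clamped rate units, made-up weights, and a single inhibitory interneuron on Z, so this is a minimal sketch of the dynamics rather than anything resembling Emergent's real model:

```python
def clamp(v):
    """Keep a firing rate in [0, 1]."""
    return max(0.0, min(1.0, v))

w = 0.8          # learned, mirrored weights for X<->Z and Y<->Z
x_ext = 1.0      # external input drives X only; Y gets no outside input

x = y = z = 0.0
for _ in range(30):
    inhib = z                               # interneuron fires with Z's activity
    z = clamp(w * x + w * y - 0.5 * inhib)  # Z detects X and Y, minus inhibition
    x = clamp(x_ext + w * z)                # feedback boosts the already-firing X
    y = clamp(w * z)                        # feedback biases the silent Y

# The loop settles into a stable state: X stays at ceiling, Z is held in
# check by inhibition, and Y ends up active even though it never received
# any external input - the pattern has been completed from one input.
```

Without the `inhib` term, Z's activity would ratchet upward every pass; with it, the network converges in a handful of iterations, which is the "settle into a stable state" behaviour described above.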
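The 'red herring' situation can be sketched the same way - the firing threshold and the weights are invented numbers, but they show why a fully active first tree connected only by a weak link fails, while a small well-placed cue succeeds:

```python
THRESHOLD = 0.5   # a binding neuron needs at least this much input to fire

def fires(net_input):
    return net_input if net_input >= THRESHOLD else 0.0

w_strong = 0.8    # weights inside each tree of associations
w_weak = 0.1      # the 'tear': the only link between tree 1 and tree 2

tree1 = 1.0                       # the first scene, fully and vividly recalled
binding = fires(w_weak * tree1)   # 0.1 is below threshold: binding stays silent
tree2 = w_strong * binding        # no feedback, so the second scene stays dark

# A small but well-placed cue *inside* the second scene starts the cascade:
cue = 0.7
binding_cued = fires(w_strong * cue)   # 0.56 clears the threshold
tree2_cued = w_strong * binding_cued   # feedback begins bootstrapping tree 2
```

However strong `tree1`'s activity gets, it is multiplied by the weak weight before reaching the binding neuron, so it can never clear the threshold - whereas even a modest cue inside the second context rides the strong internal weights straight through it.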