(February 4, 2016 at 2:51 pm)Rhythm Wrote: What are your thoughts. How does the NN learn to learn, and do you think that we're nn blanks at birth (or to what extent would we or could we be?), or do you expect that like a pc...we're shipped with bundled utilities, lol? Or do you see us more as hard built to solve for x. A loaded gun waiting for the trigger to be pulled? Also, can you imagine a failure condition for an NN. Some scenario in which they are provided with the inputs but fail to comp? What would that be like, how could it happen?
I'll just reply to this part now, if that's okay, and to the rest later. I'm already late for a mafia game that has just started, so I might not be around as much for a few days. Sorry about that.
My book discusses this question. It argues that it would not be feasible for the genome to encode specific representations in the brain, i.e. specific patterns of weights, and that it more likely encodes structural features: brain areas, specific types of connectivity, specific types of neurons, the amount of inhibition in each area, and so on. In other words, it gives us the structure, which in turn biases the kinds of dynamics the network will produce and the kinds of learning that will occur. As I said above, the connectivity makes all the difference. So to the question of whether we start as blank slates, the answer is yes and no: we are most likely built with the network architecture in place, but not the content.

As for how the network learns to learn: it's a self-organising network. Learning happens at the level of individual synapses, with no overseer required, purely as a function of how active the pre-synaptic neuron is compared to the post-synaptic neuron (as I explained in my post to benny about association). Every time a neuron fires, it's learning, but at a variable rate of change, with some specialist areas (for episodic memory etc.) learning much faster than the slow general rate used to model the world. So presented with the same environment, the weights will come to represent it no matter how randomly they start out.

But some parts of the brain are clearly structurally evolved for a particular purpose. The visual cortex, for instance, contains many different types of specialist neurons, in special arrangements and 'tuned' in very specific ways, such as the detectors that respond to lines in different orientations. So it may be that these areas are essentially hard-wired by evolution and not really about learning at all, with the learning occurring further down the line in the association areas, which make up pretty much the entire cerebral cortex.
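To make that concrete, here is a minimal sketch of the kind of unsupervised, synapse-level learning I mean, using Oja's variant of the Hebbian rule (my choice for illustration, not anything specific from the book). Each weight changes purely as a function of pre- and post-synaptic activity, with no overseer, and two weight vectors starting from different random values end up representing the same structure once exposed to the same environment:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, pre, lr):
    """One unsupervised update: change depends only on pre/post activity.
    Oja's rule = Hebbian term plus a decay that keeps the weights bounded."""
    post = w @ pre                      # post-synaptic activation
    return w + lr * post * (pre - post * w)

# Two synaptic weight vectors with different random starting points...
w1 = rng.normal(size=3)
w2 = rng.normal(size=3)

# ...exposed to the same "environment": inputs that vary mostly
# along one fixed direction, plus a little noise.
direction = np.array([1.0, 2.0, 0.5])
direction /= np.linalg.norm(direction)

for _ in range(5000):
    x = direction * rng.normal() + 0.05 * rng.normal(size=3)
    w1 = hebbian_step(w1, x, lr=0.01)
    w2 = hebbian_step(w2, x, lr=0.01)

# Both converge (up to sign) to represent the dominant structure of
# the input, no matter how randomly they were initialised.
print(np.abs(w1 @ direction), np.abs(w2 @ direction))
```

Both printed values come out close to 1: the weights have self-organised to encode the input's main direction, purely from local activity, which is the sense in which "every time a neuron fires it's learning".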
So I'm sorry about that; my problem is that I generalise about neural networks too much. There are many different types of neurons, and it's not necessarily the case that all of them learn. The type of neural network I'm most familiar with is the cortex, where associative learning happens along with bidirectionality, context, stereotypes, bias and all the rest: all the cool stuff associated with cognition.