(September 5, 2013 at 6:16 am)bennyboy Wrote: It's late, but I recognize you put a lot of work into your last post, so let me lead with a couple of comments. Consider this post an aside to the other stuff we've been talking about:
1) As far as I know, there's no "section" of the brain responsible for consciousness, though there are a few that seem to be required. To use a physical perspective on it right now, it really does seem to be the flow of information in the brain that gets experienced, i.e. no "seat" of consciousness. Correct me if I'm wrong, but that's how I remember it.
Point of concern - I'm not sure whether you are incidentally referring to 'consciousness' here instead of 'experience', whether you see the two as synonymous, or whether you actually mean consciousness - but I do see them as distinct concepts, and that requires a deeper explanation of my views.
Awareness is a term which, trivially, refers to one entity being aware of something else. It can be something as simple as a thermostat being aware of the temperature; in your ANN example, the network is aware of whatever the inputs represent. I know this seems very counter-intuitive and nothing like what we mean when we talk about human awareness, and here's why.
There are many possible levels and types of awareness, from the lowly thermostat to the Venus flytrap being aware of the insect on it - that is environmental awareness. Then there is internal awareness, such as being aware of changes in one's own physiology. Subjective awareness is yet another level, where the entity is aware of what goes on inside itself.
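To make the layering concrete, here is a purely illustrative Python sketch - the class names and everything in them are my own invention, not anything we've established in this thread. Environmental awareness is just reacting to an external reading, internal awareness is tracking one's own physiology, and the crude 'subjective' layer is a record of what goes on inside.

```python
# Purely illustrative; the names and numbers are arbitrary.

class Thermostat:
    """Environmental awareness only: it reacts to an external reading."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def sense(self, temperature):
        return "heat_on" if temperature < self.setpoint else "heat_off"


class Organism(Thermostat):
    """Adds internal awareness (its own 'physiology') and a crude stand-in
    for subjective awareness (a record of its own internal goings-on)."""
    def __init__(self, setpoint):
        super().__init__(setpoint)
        self.energy = 1.0             # internal state
        self.introspection_log = []   # 'awareness of what goes on inside itself'

    def sense(self, temperature):
        action = super().sense(temperature)                    # environmental awareness
        self.energy -= 0.01                                    # internal awareness: physiology changes
        self.introspection_log.append((action, self.energy))   # the 'subjective' layer
        return action
```

Obviously nothing here is conscious; the point is only that the three layers are distinct kinds of being-aware-of.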
I believe all these forms of awareness combined are what we call 'consciousness'. When we refer to a human being as a conscious being, we assume that he is aware of his external environment, his own physiology, his thoughts, his sensations, his memory, his imagination, his intelligence and so on and on and on. The problem is, we understand consciousness intuitively - which is why we often take it as a given, treat it as one whole entity, and don't even think of figuring out what level of awareness is required for consciousness to exist. As a result we have philosophies ranging from reductive materialism, which states there is no such thing as consciousness, to pan-psychism, which says everything has consciousness. Of course, subjective awareness or experience is a significant part of this consciousness, but there is more to consciousness than just that.
And this would explain why we can't point to a section of the brain responsible for either consciousness or subjective experience. As with consciousness, there are different layers of subjective awareness - from awareness of awareness of a pin prick (not a typo - the repetition is intentional) to awareness of the most complicated collection of ideas. The 'section' here would be determined by function, not location.
(September 5, 2013 at 6:16 am)bennyboy Wrote: 2) The reason specific outputs are stipulated is that ANNs are very slow, especially if the results have to be matched and punished or rewarded by actual humans. However, if an internet-based project could get enough attention (e.g. a million users judging the artistic merit of a visual output 10 times per day), it might be possible to train a very complex system in a reasonable time. But as far as the mechanism goes, it doesn't "know" if it's outputting visual information, or digitized sound, or whatever. So long as the end user can map the output to any kind of hardware, the training process can result in any desired kind of result.
Interestingly enough, I think that recursively treating these outputs as inputs may result in a faster "learning curve".
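Here's a minimal NumPy sketch of what I mean - the previous output is simply concatenated onto the next input, which makes the loop recurrent. The sizes and the tanh activation are arbitrary choices on my part.

```python
# Minimal sketch of "outputs as inputs": the network's previous output is
# fed back as part of the next input vector.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
# The weights see the external input plus the fed-back output.
W = rng.normal(scale=0.5, size=(n_out, n_in + n_out))

def step(x, prev_out):
    combined = np.concatenate([x, prev_out])
    return np.tanh(W @ combined)

prev_out = np.zeros(n_out)
for t in range(5):
    x = rng.normal(size=n_in)       # stand-in for an external input
    prev_out = step(x, prev_out)    # output recycled into the next step
    print(t, prev_out)
```

With fixed weights this is just a little dynamical system, but the same feedback wiring is what would let training exploit the network's own earlier outputs.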
(September 5, 2013 at 6:16 am)bennyboy Wrote: 3) I'm not sure this backward propagation method is the best way to apply weights. I have in mind a simulated genetic model, where you'd spawn 1000 instances of an ANN, and discard (i.e. kill) each time the 80% of instances with the worst results. There would be no actual learning in that case, though-- just refinement through selection. But the system RUNNING all those simulations could certainly be said to be learning, I think, since thanks to the billions of aborted ANNs it would end up with an output that matched requirements.
In this model, the "survival" of the virtual organism depends on the statistical relationship of its output to its environment, which (and this is the important part) could CHANGE.
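A rough sketch of how that might look, with two liberties on my part: the population is refilled with noisy copies of the survivors (otherwise the pool would just shrink away), and the 'environment' is a target mapping that drifts from generation to generation, to capture the point that the relationship could change.

```python
# Selection-only "learning": spawn a population of small random networks,
# repeatedly kill the worst 80%, refill from the survivors, and let the
# environment drift underneath them.
import numpy as np

rng = np.random.default_rng(1)
POP, KEEP = 1000, 200            # keep the best 20%, discard the worst 80%
n_in, n_hidden = 3, 8

def spawn():
    return {"W1": rng.normal(size=(n_hidden, n_in)),
            "W2": rng.normal(size=(1, n_hidden))}

def outputs(net, X):
    """Network outputs for a batch of inputs (rows of X)."""
    return (net["W2"] @ np.tanh(net["W1"] @ X.T)).ravel()

population = [spawn() for _ in range(POP)]
X = rng.normal(size=(50, n_in))

for generation in range(30):
    # The environment (the target mapping) drifts each generation.
    target_w = np.sin(generation / 5.0) * np.ones(n_in)
    y = X @ target_w

    # Fitness = how well each network's outputs track the current environment.
    fitness = [-np.mean((outputs(net, X) - y) ** 2) for net in population]

    survivors = [population[i] for i in np.argsort(fitness)[-KEEP:]]

    # Refill with noisy copies of survivors (my addition, to keep POP constant).
    population = list(survivors)
    while len(population) < POP:
        parent = survivors[rng.integers(KEEP)]
        population.append({k: v + rng.normal(scale=0.1, size=v.shape)
                           for k, v in parent.items()})
```

None of the individual networks learns anything; only the population-level statistics change, which is exactly the sense in which the system running the simulation could be said to learn.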
If you are doing that, you might as well add in the capacity to randomly spawn additional neural networks within a particular ANN. Given that, it may not be surprising if it automatically ends up generating an ANN with experiential capacity.
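Something like the growth step below is what I have in mind - a structural mutation that wires a new hidden unit (or, applied repeatedly, a whole new sub-network) into an existing net with small random weights, so the existing behaviour is barely disturbed. It's similar in spirit to schemes like NEAT, but the details here are just my own illustration.

```python
# Structural mutation: grow an extra hidden unit inside an existing network.
import numpy as np

rng = np.random.default_rng(2)

def add_hidden_unit(net):
    """Return a copy of net with one extra hidden unit, wired in weakly."""
    W1, W2 = net["W1"], net["W2"]
    new_in = rng.normal(scale=0.01, size=(1, W1.shape[1]))    # new unit's input weights
    new_out = rng.normal(scale=0.01, size=(W2.shape[0], 1))   # new unit's output weights
    return {"W1": np.vstack([W1, new_in]),
            "W2": np.hstack([W2, new_out])}

net = {"W1": rng.normal(size=(8, 3)), "W2": rng.normal(size=(1, 8))}
grown = add_hidden_unit(net)
print(grown["W1"].shape, grown["W2"].shape)    # (9, 3) (1, 9)
```

Whether anything built this way would ever have 'experiential capacity' is of course the whole question, but at least the machinery for open-ended structural growth is cheap to add.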