RE: Pleasure and Joy
September 5, 2013 at 6:16 am
(This post was last modified: September 5, 2013 at 6:17 am by bennyboy.)
It's late, but I recognize you put a lot of work into your last post, so let me lead with a couple of comments. Consider this post an aside to the other stuff we've been talking about:
1) As far as I know, there's no "section" of the brain responsible for consciousness, though there are a few that seem to be required. To put it in physical terms, it really does seem to be the flow of information in the brain that gets experienced, i.e. there is no "seat" of consciousness. Correct me if I'm wrong, but that's how I remember it.
2) The reason specific outputs are stipulated is that ANNs are very slow to train, especially if the results have to be evaluated and punished or rewarded by actual humans. However, if an internet-based project could get enough attention (e.g. a million users judging the artistic merit of a visual output 10 times per day), it might be possible to train a very complex system in a reasonable time. But as far as the mechanism goes, it doesn't "know" whether it's outputting visual information, or digitized sound, or whatever. So long as the end user can map the output to any kind of hardware, the training process can produce any desired kind of result.
3) I'm not sure this backpropagation method is the best way to apply weights. I have in mind a simulated genetic model, where you'd spawn 1000 instances of an ANN, and each generation discard (i.e. kill) the 80% of instances with the worst results. There would be no actual learning in that case, though-- just refinement through selection. But the system RUNNING all those simulations could certainly be said to be learning, I think, since thanks to the billions of aborted ANNs it would end up with an output that matched requirements.
In this model, the "survival" of the virtual organism depends on the statistical relationship of its output to its environment, which (and this is the important part) could CHANGE.
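The selection-only scheme in point 3 can be sketched in a few lines. This is a toy illustration, not a real ANN: each "network" is reduced to a flat weight vector, the "environment" is a single target number, and all names and parameters (population size, cull fraction, mutation rate) are my own assumptions for the sketch.

```python
import random

POP_SIZE = 1000      # instances spawned per generation
CULL_FRACTION = 0.8  # worst 80% discarded each round
TARGET = 0.5         # stand-in for the "environment" the output must match

def output(weights):
    # Trivial stand-in "network": just the average of its weights.
    return sum(weights) / len(weights)

def fitness(weights):
    # Higher is better: closeness of the output to the environment's target.
    return -abs(output(weights) - TARGET)

def mutate(weights, rate=0.05):
    # "Reproduction" with small random variation, no gradient updates.
    return [w + random.gauss(0, rate) for w in weights]

def evolve(generations=50, n_weights=8):
    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        # Keep only the top 20%; the rest are "killed".
        survivors = population[:int(POP_SIZE * (1 - CULL_FRACTION))]
        # Refill the population by cloning and mutating survivors.
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
    return max(population, key=fitness)

best = evolve()
print(abs(output(best) - TARGET))  # should end up near zero
```

Note that no individual instance ever learns anything here; only the population-level statistics shift. And because TARGET is just a variable, the environment can CHANGE mid-run, which is exactly the property the model above is meant to capture.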