(September 5, 2013 at 9:12 am)genkaus Wrote: Interestingly enough, I think that recursively treating these outputs as inputs may result in a faster "learning curve".

I think in the case of colors, that's exactly how it would work. But when it comes to things like recognizing animals or playing 20 questions (my personal standard for AI), I'm not sure how you could do that. Hmmmmm.
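To make it concrete, here's roughly what I picture by "outputs fed back as inputs" -- a minimal NumPy sketch where the previous output is simply concatenated onto the next stimulus. The layer sizes, tanh activations, and random stimuli are all placeholder assumptions on my part, not anybody's actual design:

```python
# A minimal sketch of the "outputs fed back as inputs" idea, in plain NumPy.
# Layer sizes, the tanh activation, and the feedback-by-concatenation scheme
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HIDDEN, N_OUT = 4, 8, 3

# One hidden layer; the input is the external stimulus PLUS the previous
# output, which is what makes the loop recursive.
W1 = rng.normal(scale=0.5, size=(N_IN + N_OUT, N_HIDDEN))
W2 = rng.normal(scale=0.5, size=(N_HIDDEN, N_OUT))

def step(stimulus, prev_output):
    """One pass: concatenate the last output onto the new stimulus."""
    x = np.concatenate([stimulus, prev_output])
    hidden = np.tanh(x @ W1)
    return np.tanh(hidden @ W2)

# Run the loop: the network's own output becomes part of its next input.
output = np.zeros(N_OUT)
for t in range(5):
    stimulus = rng.normal(size=N_IN)  # stand-in for sensory input
    output = step(stimulus, output)
    print(f"t={t}: {np.round(output, 3)}")
```

For colors you could imagine the stimulus vector standing in for raw wavelength data and the fed-back output standing in for the network's last "judgment" -- but for recognizing animals or playing 20 questions, it's much less obvious what the feedback vector should even represent.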
Quote:If you are doing that, you might as well add in the capacity to randomly spawn additional neural networks within a particular ANN. Given that, it may not be surprising if it automatically ends up generating an ANN with experiential capacity.

Intuitively, I would imagine that the closer you could come to simulating brain function, the more efficiently the system would be able to learn (and retain what it learns) in a complex environment. After that, you'd drop the physical constraints of humanity and end up with something smarter than all humans.
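The "spawning" idea might look something like the toy below: a container that, with some probability each step, grows a fresh randomly-initialized sub-network and pools every module's output. The spawn rule, module shape, and simple averaging are all made-up placeholders for the general notion, not a serious proposal:

```python
# A rough sketch of "randomly spawning sub-networks": with some probability
# each step, a new randomly-weighted module is added, and all module outputs
# are averaged. Spawn probability, module shape, and pooling are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_OUT = 4, 3

def new_module():
    """A tiny randomly-weighted linear net (one weight matrix)."""
    return rng.normal(scale=0.5, size=(N_IN, N_OUT))

modules = [new_module()]

def forward(x, spawn_prob=0.2):
    # Occasionally add another network to the ensemble.
    if rng.random() < spawn_prob:
        modules.append(new_module())
    # Combine every module's vote into one output.
    return np.tanh(np.mean([x @ W for W in modules], axis=0))

for t in range(10):
    y = forward(rng.normal(size=N_IN))
print(f"{len(modules)} modules after 10 steps; last output {np.round(y, 3)}")
```

Whether piling up modules like this ever adds up to "experiential capacity" is, of course, exactly the open question.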
But back to the philosophy-- even if I could program a computer to learn as humans do, and output responses with the same degree of predictability/unpredictability for any context, I'm still not confident that it would really be experiencing the redness of an apple as redness.
I have thought of a way in which I might be convinced, though. If you could map the output of such a device TO the human brain, and end up with an extended awareness, then that could be a start.