RE: Pleasure and Joy
September 6, 2013 at 2:41 am
(This post was last modified: September 6, 2013 at 2:42 am by genkaus.)
(September 5, 2013 at 7:15 pm)bennyboy Wrote: I think in the case of colors, that's exactly how it would work. But when it comes to things like recognizing animals and playing 20 questions (my personal standard for AI), I'm not sure how you could do that. Hmmmmm.
I think that for an AI the desired behavior would not be recognition but concept formation.
In your example, the AI would have preset categories of different colors - red, blue, yellow etc. - and it would simply match the given input to a color. Take this a step further - with preset categories of physical structure, behavior, DNA etc. - and you have something capable of identifying animals. But the AI isn't 'learning' anything here. Learning would require the capacity to create those categories from the input itself. If, after being shown different inputs, it can identify different traits (color, shape, texture, sound) and can then categorize them independently of the input, that would be a better indicator of intelligence.
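The contrast can be sketched in a few lines of toy Python. This is only an illustration, not any real AI system: the function names are made up, and a bare-bones k-means clustering stands in for whatever learning mechanism an actual AI would use to form its own categories.

```python
def match_preset(color_rgb, presets):
    """'Preset categories': just pick the nearest predefined label.
    Nothing is learned - the categories were handed to the system."""
    return min(presets,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(color_rgb, presets[name])))

def form_concepts(samples, k, steps=20):
    """'Concept formation' (toy version): a minimal k-means that invents
    its own k categories (centroids) purely from the inputs it is shown,
    with no labels supplied in advance."""
    centroids = samples[:k]  # naive initialisation from the first inputs
    for _ in range(steps):
        # assign each sample to its nearest current centroid
        groups = [[] for _ in range(k)]
        for s in samples:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(s, centroids[j])))
            groups[i].append(s)
        # move each centroid to the mean of its group
        centroids = [
            tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids
```

The first function can only ever answer with a label someone programmed in; the second ends up with categories nobody specified, which is closer to the "better indicator of intelligence" described above.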
(September 5, 2013 at 7:15 pm)bennyboy Wrote: Intuitively, I would imagine that the closer you could come to simulating brain function, the more efficiently the system would be able to learn (and retain learning) in a complex environment. After that, you'd drop the physical constraints of humanity, and end up with something smarter than all humans.
But back to the philosophy-- even if I could program a computer to learn as humans do, and output responses with the same degree of predictability/unpredictability for any context, I'm still not confident that it would really be experiencing the redness of an apple as redness.
I have thought of a way in which I might be convinced, though. If you could map the output of such a device TO the human brain, and end up with an extended awareness, then that could be a start.
While there is no such device yet, the experimental premise you've laid out can answer your other question - about the nature of human experience. If there were a device that could 'connect' your brain to another person's brain, and if there were any shared experience, then that should establish both that experience is a brain function and that it is possible for another person's experience to be directly accessible to you, correct?
(September 5, 2013 at 8:11 pm)ChadWooters Wrote: Genkaus's argument in summary:
I am sentient and engage in certain behaviors.
Other is behaving in a certain way.
Thus Other is sentient.
or
I am human and can run.
Fido can run.
Thus Fido is human.
You missed a line in both.
I am sentient and engage in certain behaviors.
I find that sentience is necessary for those behaviors.
Other is behaving in the same certain way.
Thus Other is sentient.
or
I am human and can run.
I find that being human is necessary to run.
Fido can run.
Thus Fido is human.