(May 10, 2012 at 2:50 pm)ChadWooters Wrote: Are you saying that a machine capable of simulating the outwards behaviors of a sentient being must have the inner experiences similar to our own?
No, I'm saying that before we start to judge the sentience of a machine, we must make sure that the mechanism required for inner experiences is in place.
I do believe that a machine would be incapable of correctly simulating the outward behavior of sentience without having the inner experiences, but I'm not using that as an argument.
(May 10, 2012 at 2:50 pm)ChadWooters Wrote: Okay, then we generally agree that subjective experiences can only be known by the sentient entity having them, i.e. “privileged access.” In my thought problem of the cyberlink, I was thinking of reports made by the human after severing the link. They’d say something like, “I felt the position of things, kind of like hearing, but more like a picture in my head.”
That is what I was addressing. I'm saying that such testimony would be useless for determining subjective experience or consciousness: because that capacity already belongs to the human who was connected, there simply wouldn't be any way to show that the machine had a separate consciousness of its own.
On a side note, I find the idea unrealistic that a scientist could create a completely sentient and intelligent machine without understanding how that sentience comes about, remaining perpetually perplexed every time the machine gave an indication of being conscious. I'm reminded of the great Greek philosopher and inventor Mechanicles (from Aladdin), who was not only able to create sentient, intelligent, and responsive machines out of nothing but gears and levers, but never showed any surprise at those properties. And whatever his other moral failings, he always treated them with a great deal of affection.