(November 28, 2012 at 4:45 am)Rayaan Wrote:
(November 28, 2012 at 1:28 am)whateverist Wrote: Even so, is there any reason to think a program will ever experience subjective states, become self-aware, have an identity crisis, or be said to exhibit wisdom that does not directly reflect that of its programmer? I see that as a very different question from asking whether a machine could be programmed in such a way as to fool us into thinking these things are going on. I'm skeptical to the point of finding the idea absurd.
I don't think that a program or a machine could ever be self-aware or have consciousness in the same way that we do. The level of self-referentiality in the human mind runs much deeper than the level of self-referentiality in machines and computer programs.
Interestingly, however, I've read in a few articles that a computer program can be thought to have consciousness - or a mind of its own, so to speak - depending on how you define the word "conscious". There are certain definitions of consciousness framed in terms of computational properties under which a program's behavior can qualify it as "conscious" or "self-aware." You can see some of those definitions and their applications on page four of the link below:
Conscious Machines and Consciousness Oriented Programming
Thanks for the link. Gotta get off to work just now, but the first sentence is telling:
"..we investigate the following question: how could you write such computer programs that can work like conscious beings?"
How exactly do conscious beings 'work'? We understand a lot about how a human body works, and we've mapped the brain to find those places where a tweak will create a twitch or a severance can create a particular sort of dysfunction. Even so, I am not convinced that we are anywhere close to understanding how conscious beings work.