Will AI ever = consciousness or sentience?
November 28, 2012 at 1:28 am
On another website I visit, it seems most people think our machines will be joining us as sentient beings in their own right any day now. I'd like to poll our larger group to find out what the prevailing opinion is here.
Can machines even be said to think? What counts as thinking? If the execution of decision trees is thinking, then indeed they already 'think'. A program that can diagnose diseases strikes me as very intelligent, but its intelligence of course reflects that of its programmer.
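To make that concrete, here is a minimal sketch of the kind of decision tree I mean, in Python. The symptoms, conditions, and the diagnose function are all invented for illustration; a real diagnostic program would be vastly larger, but the mechanism is the same, and so is the point: every branch was put there by a person.

```python
# A toy diagnostic decision tree: it "diagnoses" by walking a fixed
# set of yes/no questions. All symptoms and conditions here are made up
# for illustration.

def diagnose(has_fever: bool, has_cough: bool, has_rash: bool) -> str:
    """Return a (hypothetical) diagnosis by executing a fixed decision tree."""
    if has_fever:
        if has_cough:
            return "possible flu"
        if has_rash:
            return "possible measles"
        return "unspecified fever"
    if has_cough:
        return "possible cold"
    return "no diagnosis"

# The program 'decides', but only along paths its programmer laid down.
print(diagnose(has_fever=True, has_cough=False, has_rash=True))  # possible measles
```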
I'm not convinced that machines are or ever will be up to all the tasks we might describe as thinking, but I'll concede that at least some 'thinking' tasks can be accomplished by machines.
Even so, is there any reason to think a program will ever experience subjective states, become self-aware, have an identity crisis, or be said to exhibit wisdom that does not directly reflect that of its programmer? I see that as a very different question from asking whether a machine could be programmed in such a way as to fool us into thinking these things are going on. I'm skeptical to the point of finding the idea absurd.
I can't make an airtight argument against the possibility, but I don't believe it is or ever will be possible. What do you think?