(November 28, 2012 at 1:28 am)whateverist Wrote: On another website I visit it seems most people think our machines will be joining us as sentient beings in their own right any day now. I'd like to poll our larger group to find out what is the prevailing opinion here.
Can machines even be said to think? What counts as thinking? If the execution of decision trees is thinking then indeed they already do 'think'. A program that can diagnose diseases strikes me as very intelligent, but its intelligence of course reflects that of its programmer.
I'm not convinced that machines are or ever will be up to all the tasks we might describe as thinking, but I'll concede that at least some 'thinking' tasks can be accomplished by machines.
Even so, is there any reason to think a program will ever experience subjective states, become self-aware, have an identity crisis, or be said to exhibit wisdom that does not directly reflect that of its programmer? I see that as a very different question than asking whether a machine could be programmed in such a way as to fool us into thinking these things are going on. I'm skeptical to the point of finding the idea absurd.
I can't make an airtight argument against the possibility, but I don't believe it is or ever will be possible. What do you think?
As long as the machine follows the programming laid down by the programmer, I agree, it cannot be considered intelligent or sentient. But if a machine is created with the capacity to override and rewrite its own programming, then yes, it would become intelligent and sentient, and the extent to which it can write its own programs would reflect its level of sentience.
For example, consider a diagnostic machine that is fed the names of all known diseases along with their symptoms and treatments. If the machine then becomes capable of adding new entries or reclassifying the previous ones, it is displaying intelligence or sentience.
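To make the distinction concrete, here's a toy sketch of my own (everything in it is hypothetical, not from any real diagnostic system): the knowledge base is seeded by the programmer, but the machine can also add new entries or fold new symptoms into old ones on its own, which is the step I'm suggesting goes beyond "just following the program".

```python
class DiagnosticMachine:
    def __init__(self, knowledge):
        # knowledge: disease name -> set of symptoms, supplied by the programmer
        self.knowledge = {disease: set(symptoms) for disease, symptoms in knowledge.items()}

    def diagnose(self, symptoms):
        """Return the known disease whose symptom set overlaps most, or None."""
        symptoms = set(symptoms)
        best, best_overlap = None, 0
        for disease, known in self.knowledge.items():
            overlap = len(symptoms & known)
            if overlap > best_overlap:
                best, best_overlap = disease, overlap
        return best

    def learn(self, disease, symptoms):
        """Add a new entry, or extend an existing one with new symptoms --
        the self-modifying step the post is talking about."""
        self.knowledge.setdefault(disease, set()).update(symptoms)


machine = DiagnosticMachine({"flu": {"fever", "cough", "aches"}})
print(machine.diagnose({"fever", "cough"}))    # matches the programmer-seeded entry: flu
machine.learn("covid", {"fever", "cough", "anosmia"})
print(machine.diagnose({"anosmia", "fever"}))  # matches the entry the machine added itself: covid
```

Of course, even the `learn` step here only runs because the programmer wrote it, which is exactly the point of contention: where does following the program end and genuine self-modification begin?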