(November 30, 2012 at 8:31 pm)whateverist Wrote: I think such a program has the capacity to perform the task of medical diagnosis more thoroughly and accurately than any human, and may well be able to access all the latest most relevant statistical data by way of the cloud. So in that sense I would say it is highly intelligent and potentially to a degree exceeding our own for the task for which it has been programmed. It isn't clear to me how its capacity to update and integrate new data, though highly intelligent, would ever amount to sentience.
I suspect I'm more skeptical because I play no computer games and so don't spend much time in virtual environments. Of course, this is a virtual environment, but I'm not the only human here ... or am I?
(November 30, 2012 at 8:43 pm)jonb Wrote: That would not work for me as a definition of sentience. You could set up a programme that grouped objects by various characteristics and could look for new connections and be able to assess new objects.
As far as I can see the only way of telling if a thing has a self, is seeing whether it is selfish.
In my last post, I did not give sufficient thought to the topic and thus mistakenly equated sentience with consciousness/intelligence. Having thought about them in more detail, here are my views in a more detailed form.
Consciousness: As I understand it, it simply means awareness. For example, animals and plants can be said to be conscious if they can perceive external phenomena. The easiest way to show that something is conscious is to show that it reacts to those phenomena. Even some current machines would qualify under this definition of consciousness. However, being aware of the awareness itself may not be necessary. Thus, there would be different levels of consciousness, ranging from the simple, such as a sunflower turning towards the sun, to the very complex, such as human awareness.
Sentience: I understand it to mean the "ability to feel". Consciousness can be divided into two forms - external and internal. External consciousness, i.e., being aware of the objects around you, is established pretty easily, since everyone can perceive the said objects. Even physical sensations, such as pain, hunger, arousal etc., can be called external, since they are of the body and not of the mind. Sentience comes with awareness of this awareness.
Conceptually, we've already made machines and computers that are aware of both external and internal processes. Technically, we can equate a low-battery indicator with hunger, slowed processing with fatigue, and so on. What makes us different is that our awareness of these reactions - the sensations of hunger, pain etc. - takes a particular form that we cannot easily verify objectively with others. This is the mind being aware of what is going on inside of it.
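To make the analogy concrete, here is a toy sketch of a machine that monitors its own internal state and labels the readings in "bodily" terms. Everything in it (the function name, the thresholds, the labels) is my own illustration, not anything from the discussion - the point is only that reporting on internal state is easy, while being aware of the report is the hard part:

```python
# Toy illustration: raw internal readings mapped to analogues of
# bodily sensations, per the low battery ~ hunger comparison above.

def internal_sensations(battery_pct, cpu_load_pct):
    """Return labels for internal states, by loose analogy with feelings."""
    sensations = []
    if battery_pct < 20:       # low battery ~ "hunger"
        sensations.append("hunger")
    if cpu_load_pct > 90:      # sustained overload ~ "fatigue"
        sensations.append("fatigue")
    return sensations

# The machine "notices" its state...
print(internal_sensations(battery_pct=10, cpu_load_pct=95))
# ...but nothing here is aware OF that noticing, which is the
# awareness-of-awareness that the post reserves for sentience.
```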
Is the same level of complexity possible for machines? Why not? Our limitation here is that when it comes to subjective experiences, we find it difficult to imagine them in any form other than the one we already have. It is like trying to describe sights to a blind man or sounds to a deaf one. The reason humans agree on so many experiences in this area is that we share common systems of perception. If alternate systems are developed for machines, they may not "feel" hunger or pain the same way we do, but they would feel it nonetheless.
As for the detection of sentience, we come back to the old Philosophical Zombie problem. That is, if everyone around you lost the capacity to be sentient but continued to behave as before, purely through biological wiring, how would you know? Thankfully, we are making progress in that field by actually studying the brain. By finding out which areas are responsible for which forms of awareness, we can tell whether something is sentient by seeing whether it actually feels emotions. The same principle could be applied to machines as well - if and when they become sentient.