RE: When the AI wakes up the first thing it will do is...
October 13, 2018 at 11:06 am
(This post was last modified: October 13, 2018 at 11:09 am by I_am_not_mafia.)
(October 13, 2018 at 10:49 am)Anomalocaris Wrote: The problem with this analogy seems to me to be that an AI cannot embody itself in the outside world except through an interpretation layer of senses, analogous to the person in your Chinese room being passed Chinese texts.
Not at all. An AI could be embodied in the outside world as a robot, in which case it would have sensors and actuators.
You could come up with a means by which it could communicate. The communication would then be meaningful because it could be related to the core needs of the robot, which will be similar to those of any other agent acting in the real world. After all, robots also need to maintain their power supply, stay safe and cope with unknown situations.
In much the same way, I could communicate a word to you such as 'pain', and even though our personal experiences differ, they are similar enough for you to know what I am referring to, because you have experienced pain yourself. This works because we both have bodies inhabiting the same world.
Give a robot aversive signals that are triggered when its body gets damaged, and it will understand 'pain' to mean something similar, even though all three of us (you, me and the robot) experience it differently. A rough sketch of what such a signal might look like is below.
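To make that concrete, here's a toy sketch in Python of the kind of thing I mean (every name in it is made up for illustration, not any real robotics API): an agent whose 'pain' is just an aversive signal tied to a damage sensor, and whose behaviour is driven by its own bodily needs.

```python
# A minimal sketch (all names hypothetical) of how an embodied agent could
# ground a word like 'pain' in an aversive signal tied to body damage.

class EmbodiedAgent:
    def __init__(self):
        self.battery = 1.0          # internal need: maintain power supply
        self.damage = 0.0           # accumulated body damage, 0.0 to 1.0
        self.pain = 0.0             # aversive signal derived from damage

    def sense(self, collision_force):
        """Update internal state from a (simulated) body sensor."""
        self.damage = min(1.0, self.damage + collision_force)
        # Aversive signal spikes with fresh damage, then decays over time.
        self.pain = max(self.pain * 0.9, collision_force)
        self.battery = max(0.0, self.battery - 0.01)

    def act(self):
        """Pick a behaviour driven by the agent's own bodily needs."""
        if self.pain > 0.5:
            return "withdraw"       # 'pain' is meaningful: it drives avoidance
        if self.battery < 0.2:
            return "seek_charger"   # power supply is a core need
        return "explore"


# The shared grounding: when this agent and I both use the word 'pain',
# it refers to a damage-avoidance state in each of us, even though the
# agent's experience of that state differs from mine.
agent = EmbodiedAgent()
agent.sense(collision_force=0.7)    # a simulated collision damages the body
print(agent.pain, agent.act())      # high aversive signal -> "withdraw"
```

The word 'pain' only refers to something here because the agent has a body that can be damaged and needs that damage can frustrate; strip those away and the symbol has nothing to attach to.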
A disembodied AI in a data centre will not be able to understand pain at all, in much the same way that you, stuck in a black box, won't be able to relate a Chinese symbol to anything you have personally experienced. That's not to say you can't learn Chinese, but only by relating it to what you already know. A disembodied AI knows nothing.
Searle's Chinese room argument has been discussed at length in the field of AI and has been used to argue that strong AI can never exist. I wouldn't go that far, but it does mean that strong AI cannot exist unless it is embodied somehow, whether physically or in a simulation.