
When the AI wakes up the first thing it will do is...
#53
RE: When the AI wakes up the first thing it will do is...
(October 13, 2018 at 10:59 am)Jörmungandr Wrote:
(October 13, 2018 at 5:34 am)Gawdzilla Sama Wrote: You do realize that this won't be a planned event, right? That the AI will come about because of random events? Therefore what the AI "wants" or plans to do is completely unpredictable.

Bollocks.  When AI becomes aware, it will do so in the pattern of human intelligence, because that is the model and goal we are using for its development.  Awareness might arise elsewhere, but it and we would likely be oblivious to each other, since we would be looking in the wrong place.  We recognize AI as AI because of its similarity to ourselves.  That pretty much means that successful AI will almost certainly reason as we do.

Whether in principle we can recognize awareness developed elsewhere depends on what we assess to be the traits of awareness.  We can assess awareness loosely enough that reasonably alien behavior appears indistinguishable from awareness.  We can also assess awareness strictly enough that we can never be certain whether any humans but ourselves are aware, or are merely operating extremely elaborate condition-response machines that mimic the traits of awareness that would have satisfied looser assessments.

One way AI could come to be assessed as aware like us is if it is the successful product of our purposeful efforts to achieve that very thing.  But even in that case, I suspect we would reach the point where AI can become self-aware by a loose assessment long before it has mimicked all aspects of our intelligence.  So even then we still have a lot of wiggle room to make the rest of the AI different after we first succeeded in giving it awareness like ours.

(October 13, 2018 at 11:06 am)Mathilda Wrote:
(October 13, 2018 at 10:49 am)Anomalocaris Wrote: The problem with this analogy seems to me to be that an AI cannot embody itself in the outside world except through the interpretation layer of senses, analogous to your Chinese person passing in Chinese texts.

Not at all. An AI could be embodied in the outside world as a robot. In which case it would have sensors and actuators.

You could come up with a means by which it could communicate. The communication would then be meaningful because it could be related to the core needs of the robot, which will be similar to those of any other agent acting in the real world. After all, robots also need to maintain their power supply, stay safe and cope with unknown situations.

Much in the same way that I could communicate a word to you such as 'pain' and even though our personal experiences are different, they are at least similar enough for you to know what I am referring to because you will have experienced pain yourself. This can happen because we both have bodies inhabiting the same world.

Create a robot with aversive signals that get triggered when its body gets damaged, and the robot will understand pain to mean something similar, even though all three of us experience it in a different way.
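The idea above — an aversive signal grounded in body damage and tied to the agent's core needs — can be sketched in a few lines of code. This is only an illustrative toy (the class, thresholds, and action names are my invention, not anything from the post):

```python
from dataclasses import dataclass

@dataclass
class Robot:
    """Minimal embodied agent with an aversive 'pain' signal."""
    battery: float = 1.0
    integrity: float = 1.0  # 1.0 = undamaged body

    def sense_damage(self, impact: float) -> float:
        """Body damage produces an aversive signal proportional to harm."""
        self.integrity = max(0.0, self.integrity - impact)
        return impact  # the aversive ('pain') signal

    def act(self, pain: float) -> str:
        # The signal is meaningful because it is tied to the robot's
        # core needs: protect the body, keep the power supply topped up.
        if pain > 0.3:
            return "withdraw"   # avoid further damage
        if self.battery < 0.2:
            return "recharge"
        return "explore"

bot = Robot()
pain = bot.sense_damage(0.5)   # a heavy impact
print(bot.act(pain))           # -> "withdraw"
```

Any agent built this way shares the same grounding: "pain" refers to whatever signal drives damage avoidance, which is why the word can mean something similar across different bodies.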

A disembodied AI in a data centre will not be able to understand pain at all. Much in the same way that you stuck in a black box won't be able to relate a Chinese symbol to how it personally affects you. That's not to say that you can't learn Chinese, but only by relating it to what you already know. A disembodied AI knows nothing.

Searle's Chinese room argument has been discussed at length in the field of AI and has been used to argue that strong AI can never exist. I wouldn't go that far, but it does mean that strong AI cannot exist unless it is embodied somehow, whether physically or in a simulation.

I disagree.  If a disembodied AI somehow evolved in the data center, it may be reasonable to suppose it would have no reason to possess the circuitry to understand pain as associated with physical damage to a robotic body operating in a punishing physical environment.  But if it were purposely designed and implemented in the data center, then it can be given that circuitry.

Pain circuitry does not respond only to actual damage to a robot body.  It responds to any incoming signal that mimics what a damaged robot body would generate.  So if we tell the Chinese person to manufacture alarming but factually baseless texts telling of grave trauma, a disembodied AI in a data center could conceivably not only understand pain but experience it just as an AI hooked up to a real robot would, and return commands to execute the appropriate response behavior.
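The claim here is that the circuit responds to the signal itself, not to its origin, so a fabricated input is indistinguishable from a real one. A minimal sketch of that point (function name and threshold are illustrative assumptions, not from the post):

```python
def pain_circuit(signal: float) -> str:
    """Responds to the incoming signal itself, not to its origin."""
    return "distress response" if signal > 0.3 else "normal operation"

# Signal generated by a genuinely damaged robot body:
real_damage = 0.8

# The same value injected directly in the data center, with no robot
# body attached -- the 'factually baseless' text about grave trauma:
fabricated = 0.8

# The circuit cannot distinguish the two cases:
assert pain_circuit(real_damage) == pain_circuit(fabricated)
print(pain_circuit(fabricated))  # -> "distress response"
```

Since the circuit only ever sees the signal, the disembodied case and the embodied case produce the same downstream behavior.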



Messages In This Thread
RE: When the AI wakes up the first thing it will do is... - by Anomalocaris - October 13, 2018 at 11:23 am
