
When the AI wakes up the first thing it will do is...
#51
RE: When the AI wakes up the first thing it will do is...
Or not. I'll wait and see.
Reply
#52
RE: When the AI wakes up the first thing it will do is...
(October 13, 2018 at 10:49 am)Anomalocaris Wrote: The problem with this analogy seems to me to be that an AI cannot embody itself in the outside world except through an interpretation layer of senses, analogous to your Chinese person passing in Chinese texts.

Not at all. An AI could be embodied in the outside world as a robot, in which case it would have sensors and actuators.

You could come up with a means by which it could communicate. The communication would then be meaningful because it could be related to the core needs of the robot, which will be similar to those of any other agent acting in the real world. After all, robots also need to maintain their power supply, stay safe and cope with unknown situations.

Much in the same way, I could communicate a word to you such as 'pain', and even though our personal experiences differ, they are at least similar enough for you to know what I am referring to, because you have experienced pain yourself. This can happen because we both have bodies inhabiting the same world.

Create a robot with aversive signals that get triggered when its body gets damaged, and the robot will understand 'pain' to mean something similar, even though all three of us experience it in a different way.
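A minimal sketch of that idea in Python (everything here is hypothetical and invented for illustration, not an existing robotics API):

```python
# Hypothetical sketch: 'pain' as an aversive signal grounded in
# damage to the robot's own body.

class EmbodiedRobot:
    def __init__(self):
        self.integrity = 1.0  # 1.0 = undamaged, 0.0 = destroyed
        self.aversion = 0.0   # internal aversive signal ('pain')

    def sense_damage(self, impact):
        """Physical damage lowers body integrity and raises aversion."""
        self.integrity = max(0.0, self.integrity - impact)
        self.aversion = 1.0 - self.integrity

    def act(self):
        # Behaviour is driven by the aversive signal, so the word
        # 'pain' is grounded in the robot's own bodily state.
        if self.aversion > 0.5:
            return "withdraw and seek repair"
        return "continue task"

robot = EmbodiedRobot()
robot.sense_damage(impact=0.7)
print(robot.act())  # -> withdraw and seek repair
```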

A disembodied AI in a data centre will not be able to understand pain at all, much as you, stuck in a black box, won't be able to relate a Chinese symbol to how it personally affects you. That's not to say that you can't learn Chinese, but only by relating it to what you already know. A disembodied AI knows nothing.

Searle's Chinese Room argument has been discussed at length in the field of AI and has been used to argue that strong AI can never exist. I wouldn't go that far, but it does mean that strong AI cannot exist unless it is embodied somehow, whether physically or in a simulation.
Reply
#53
RE: When the AI wakes up the first thing it will do is...
(October 13, 2018 at 10:59 am)Jörmungandr Wrote:
(October 13, 2018 at 5:34 am)Gawdzilla Sama Wrote: You do realize that this won't be a planned event, right? That the AI will come about because of random events? Therefore what the AI "wants" or plans to do is completely unpredictable.

Bollocks. When AI becomes aware it will do so in the pattern of human intelligence, because that is the model and goal we are using for its development. Awareness might arise elsewhere, but it and we would likely be oblivious to each other, as we would be looking in the wrong place. We recognize AI as AI because of its similarity to ourselves. That pretty much means that successful AI will almost certainly reason as we do.

Whether in principle we can recognize awareness developed elsewhere depends on what we assess to be the traits of awareness. We can assess awareness loosely enough that reasonably alien behavior can be seen as indistinguishable from awareness. We can also assess awareness strictly enough that we can never be certain whether any humans but ourselves are aware, or are merely operating extremely elaborate conditioned-response machines that mimic the traits of awareness which would have satisfied the looser assessments.

One way AI could come to be assessed as aware like us is that it is the successful product of our purposeful efforts to achieve that very thing. But even in that case I suspect we would reach the point where AI can become self-aware by a loose assessment long before it has mimicked all aspects of our intelligence. So even then we have a lot of wiggle room to make the rest of the AI different after we first succeeded in giving it awareness like ours.

(October 13, 2018 at 11:06 am)Mathilda Wrote: [...] Searle's Chinese Room argument has been discussed at length in the field of AI and has been used to argue that strong AI can never exist. I wouldn't go that far, but it does mean that strong AI cannot exist unless it is embodied somehow, whether physically or in a simulation.

I disagree. If a disembodied AI somehow evolved in the data center, it might be reasonable to suppose it has no reason to possess circuitry that understands pain as associated with physical damage to a robotic body operating in a punishing physical environment. But if it were purposely designed and implemented in the data center, then it can be given the circuitry to understand pain.

Pain circuitry does not respond only to damage to a robot body.  It responds to any incoming signal mimicking what a damaged robot body will generate.  So if we tell the Chinese person to manufacture alarming but factually baseless texts telling of grave trauma, a disembodied AI in a data center can conceivably not only understand pain, but can experience pain just like an AI hooked up to a real robot, and return commands to execute the appropriate response behavior.
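A toy illustration of that point (the signal format and handler below are invented for the sketch, not taken from any real system): the handler only ever sees the incoming signal, so a fabricated damage report is indistinguishable from a genuine one.

```python
# Toy sketch: a pain handler responds to whatever looks like a damage
# report; it cannot tell genuine signals from fabricated ones.

def pain_response(signal):
    if signal.get("type") == "damage" and signal.get("severity", 0.0) > 0.5:
        return "execute avoidance behaviour"
    return "no response"

real = {"type": "damage", "severity": 0.9, "source": "left actuator"}
fake = {"type": "damage", "severity": 0.9, "source": "fabricated text"}

print(pain_response(real))  # execute avoidance behaviour
print(pain_response(fake))  # identical response: the circuit can't tell
```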
Reply
#54
RE: When the AI wakes up the first thing it will do is...
Remember when robots just shuffled around?

https://www.youtube.com/watch?v=LikxFZZO2sk

https://www.youtube.com/watch?v=HTPIED6jUdU

You can fix ignorance, you can't fix stupid.

Tinkety Tonk and down with the Nazis.
Reply
#55
RE: When the AI wakes up the first thing it will do is...
In Avengers: Age of Ultron, an AI escaped into the internet, built itself a whole army of robot selves, and tried to destroy humanity.

Plus he sounded like James Spader.
Reply
#56
RE: When the AI wakes up the first thing it will do is...
(October 13, 2018 at 11:23 am)Anomalocaris Wrote: I disagree. If a disembodied AI somehow evolved in the data center, it might be reasonable to suppose it has no reason to possess circuitry that understands pain as associated with physical damage to a robotic body operating in a punishing physical environment. But if it were purposely designed and implemented in the data center, then it can be given the circuitry to understand pain.

The whole point of Searle's Chinese Room problem is to demonstrate that what you are trying to do is impossible, no matter how close you get to human-level intelligence. There is no magic bullet that will make it happen just because it is more advanced.

If a human standing in for the computer can't do it, how can we hope that a computer can?

Humans have circuitry to understand pain, but these neural circuits won't get triggered by looking at arbitrary Chinese symbols.


(October 13, 2018 at 11:23 am)Anomalocaris Wrote: Pain circuitry does not respond only to damage to a robot body. It responds to any incoming signal mimicking what a damaged robot body will generate. So if we tell the Chinese person to manufacture alarming but factually baseless texts telling of grave trauma, a disembodied AI in a data center can conceivably not only understand pain, but can experience pain just like an AI hooked up to a real robot, and return commands to execute the appropriate response behavior.

How does that work then? How can you manufacture alarming but factually baseless texts telling of grave trauma to cause pain to a disembodied AI?

How would you give a disembodied AI in a data centre the circuitry to understand pain? What would trigger the pain, and why? What would the effect be? How can you have pain without a sense of self (which requires being embodied)? What would the function of pain be?

It sounds easy in theory when you talk about these things in general, but in practice it's impossible to know where to even start without narrowing it down a lot more and defining key concepts.

This relates to another issue in AI. Just because we label some function in an AI as emotion, pain, consciousness, etc., it does not necessarily mean that it is the same as what we have labelled it.

So we could program a module, call it pain, and hard-code a certain reaction to it. But is it actually pain, or merely a hard-coded subroutine? The test is whether it performs the same function. And that means being embodied, whether physically or in a simulated world.
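The distinction is easy to state in code (a deliberately crude, hypothetical sketch): the label 'pain' does all the work here, because nothing is at stake for any agent.

```python
# A 'pain module' in name only: a hard-coded reaction with no body,
# no self, and no consequence if the signal is ignored.

def pain(signal):
    print("Ouch!")  # fixed reaction; only the function's name suggests pain

pain("arbitrary input")  # prints 'Ouch!' but nothing is at stake
```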
Reply
#57
RE: When the AI wakes up the first thing it will do is...
(October 13, 2018 at 1:13 pm)Mathilda Wrote: [...] Humans have circuitry to understand pain, but these neural circuits won't get triggered by looking at arbitrary Chinese symbols. [...] So we could program a module, call it pain, and hard-code a certain reaction to it. But is it actually pain, or merely a hard-coded subroutine? The test is whether it performs the same function. And that means being embodied, whether physically or in a simulated world.

Whether human neural pain circuits are triggered by looking at unfamiliar Chinese characters depends on who tried to write the Chinese characters, I suppose.  I have seen some scribblings whose form alone drew shudders and aversion.

But seriously, we are talking about different things, I think. The Chinese Room example merely illustrates that it is difficult to differentiate a machine programmed to give awareness-like responses from genuine awareness. But I suppose that if we can actually trace all the operations involved in producing the response, we have a better basis for assessing whether the sum of the processes merely simulates awareness or is actually aware than we get from assessing the end results alone.

The answer I tried to give addresses whether an awareness can be created that resides solely in an artificial digital environment, without direct interaction with the real world outside. I am presuming that if awareness can arise once, in the biochemical circuitry in us, it can arise again, either in a similar form of circuitry or in a different form that replicates the self-aware biochemical circuitry's salient functions down to minute detail. So I proceed from the assumption that self-awareness can arise or be engineered.

You might ask how one could tell that the pain circuits actually feel pain, rather than just mimicking a pained response, if they didn't evolve as we did through interaction with a real environment. Well, I will assert that the process of evolution is not magical: if the end result is aware and feels pain, then the mechanism behind that end result, if replicated accurately in all its minute salient functions, will also be aware and feel pain.

If we replicate the circuits in a biochemical data room, and then feed them fabricated input purposely replicating all the real-world inputs the original would have received interacting with the real world, how is that not an aware AI existing purely in a data room, without any robot body?
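In computational terms that proposal is essentially sensory replay. A hypothetical sketch of it (ReplicatedCircuit is a stand-in for whatever faithfully reproduces the original circuitry; nothing here is a real system):

```python
# Hypothetical sketch of the proposal: a replicated circuit driven by
# a fabricated sensor stream identical to what an embodied original
# would have received.

class ReplicatedCircuit:
    """Stand-in for a faithful replica of an embodied agent's circuitry."""
    def step(self, sensor_frame):
        # In the proposal this would reproduce the original's processing
        # in full detail; here it just echoes a motor command.
        return f"motor command for: {sensor_frame}"

def run_disembodied(circuit, recorded_stream):
    # The circuit gets exactly the inputs an embodied agent would get;
    # its outputs go nowhere physical, but its internal processing is
    # the same either way.
    return [circuit.step(frame) for frame in recorded_stream]

stream = ["pressure on left limb", "pressure released"]
print(run_disembodied(ReplicatedCircuit(), stream))
```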
Reply