Posts: 13122
Threads: 130
Joined: October 18, 2014
Reputation:
55
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:12 pm
(This post was last modified: November 4, 2016 at 6:12 pm by abaris.)
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.
We just need to assume that it's inevitable eventually.
AI is getting better and better and one day will be so much smarter than us that we will be insignificant to it.
Why exactly do we have to assume that? Because Harris says so? Is he privy to as-yet unknown information that lets him predict how a yet-to-be-developed AI will function at some point in the future? Did he address any of the crucial technological questions, or did he just ramble on about a fictional superintelligence treating us like ants? Crucial as in the most basic questions: what kind of energy should this AI run on, and on what technology, other than the one provided by its creators, should it function? Or how could it possibly beat its hardware and software limitations?
I certainly hope you're not heading in the "intelligent air" direction by now.
Posts: 67172
Threads: 140
Joined: June 28, 2011
Reputation:
162
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:12 pm
(This post was last modified: November 4, 2016 at 6:24 pm by The Grand Nudger.)
(November 4, 2016 at 5:46 pm)Mathilda Wrote: (November 4, 2016 at 5:30 pm)Excited Penguin Wrote: No, the A. G. I. begins improving itself at a rate much faster than a human and we get an extremely smart mind in a box. If you think its lack of limbs is going to be a problem... I have no idea why you would think that. If I were 10,000 times smarter than you, you wouldn't fear me because of my body.
And how would AGI improve itself exponentially faster than a human?
If you were 10,000 times smarter than me, why would I have to fear you? You'd be a fool not to be afraid of an above-average human being who found you to be in conflict with one of its goals... the earth is littered with the bones of people who didn't properly identify risk, let alone a superintelligent machine... and the means by which AI would improve itself faster than humans has been discussed not only in the video but many times in the thread - on the basis of speed alone. It could do intellectual work many, many times faster than even a commensurately intelligent human being is -capable- of doing. That intellectual work might just be AI research... how could that have escaped you as an AI researcher... don't you already use machine intelligence for precisely this purpose? I can't imagine that you're sitting there with paper and pencil running simulations by hand.
(November 4, 2016 at 5:47 pm)abaris Wrote: Here's another one. Every AI will only be a product of the technology available at the time of its creation. It will never be a species subject to evolution, since the technology will consist of the hardware and software of the time of creation. It's impossible to evolve beyond the technological limits. We didn't -evolve- beyond our own limits, we innovated beyond them - so it's clearly not impossible. The same is being proposed for superintelligent AI, chiefly since it's being designed for precisely the attribute that allowed us to do the very same thing... except that it already has an innate advantage over us, being so much faster at it than we are... and so it technically does not have to be smarter than us to be more intellectually productive than us. As a point of fact, it could be a million times "less smart", however we'd figure that one out... and still be commensurate in productivity to a human being.
The machine that designs machines. Now, as to the post above... we don't have to assume that; he lays out his argument plainly and clearly, brings up objections himself, and invites a person to do the same. Energy is bordering on a complete dodge. Lack of energy hasn't stopped us yet, nor has it stopped our own intelligence machines, our brains. Dealing with surplus energy in the form of heat is a more technical and practical concern. We've got the juice, and could get a hell of a lot more of it... if that were the problem. Particularly if we were plugging in a machine that could do 20k years of human-intellect-level research in a week... not even based on it being super smart, just super fast - like our machines already are. Harris describes this as the "winner take all" scenario that AI would effect upon us. To be second is to be obsolete. To be first is to win the world.
I am the Infantry. I am my country’s strength in war, her deterrent in peace. I am the heart of the fight… wherever, whenever. I carry America’s faith and honor against her enemies. I am the Queen of Battle. I am what my country expects me to be, the best trained Soldier in the world. In the race for victory, I am swift, determined, and courageous, armed with a fierce will to win. Never will I fail my country’s trust. Always I fight on…through the foe, to the objective, to triumph overall. If necessary, I will fight to my death. By my steadfast courage, I have won more than 200 years of freedom. I yield not to weakness, to hunger, to cowardice, to fatigue, to superior odds, For I am mentally tough, physically strong, and morally straight. I forsake not, my country, my mission, my comrades, my sacred duty. I am relentless. I am always there, now and forever. I AM THE INFANTRY! FOLLOW ME!
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:14 pm
(November 4, 2016 at 5:47 pm)Alasdair Ham Wrote: (November 4, 2016 at 5:38 pm)abaris Wrote: You didn't address the most important parts of my post. Why's that?
To repeat in a simple chain of words: Casing, processing power, energy, the ability to maintain itself, access to the necessary components, functioning at full capacity while damaged.
How is this the most important part?
The premise is already that AI is getting better and better... I don't see how the question of how it's going to happen is relevant. There are numerous ways and numerous resources.
The sun doesn't have unlimited energy but it's not exactly going to run out.
It is critically important. These are limitations upon the AI. Computers are rigid and fragile; natural intelligence and bodies are flexible and robust. They also heal.
Any intelligence, whether artificial or natural, can only be evaluated in a context or particular environment. We see dogs as stupid compared to humans in a human environment, yet they can detect a far greater range of smells than we can - enough to sniff out drugs in luggage or to sense the onset of a seizure. A dog or cat would survive an apocalypse far better than, say, a chess grandmaster.
Posts: 43162
Threads: 720
Joined: September 21, 2008
Reputation:
133
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:15 pm
(This post was last modified: November 4, 2016 at 6:16 pm by Edwardo Piet.)
@ Abaris
I'm not saying we intrinsically have to assume his premises. I'm saying we have to assume his premises in order for his conclusion to follow from his premises.
If you don't think A.I. is going to keep progressing rather than come to a complete stop then fine, I'm not surprised you reject the argument.
It's quite obvious to me that just because A.I. is making very little progress doesn't mean it's not making any progress at all or that it's going to stop progressing.
Why would there be a limit to it any more than there's a limit to other ways computers are improving?
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:16 pm
(November 4, 2016 at 5:50 pm)Alasdair Ham Wrote: Our intellect compared to ants is a LOT smarter than Einstein's or Hawking's compared to us.
We're talking about an AI so intelligent that it would have no reason to give a shit about us and see us as pests just like a lot of people do with ants.
Why would our ambitions and existence be even remotely important to an intelligence so beyond us that it considered us nothing but a pest?
We're not talking about anything going on a killing spree. We're not talking about robots remember.
Right. OK. And what's it going to do? How would it do it? Why would we give it the means to do it?
If we're not talking about robots, then what are we talking about? What can it do?
Posts: 13122
Threads: 130
Joined: October 18, 2014
Reputation:
55
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:17 pm
(November 4, 2016 at 6:12 pm)Rhythm Wrote: The machine that designs machines.
So we're moving from AI into Terminator land now. Machines setting up their own factories to create yet more advanced machines, developing their own technology at their leisure while we stand idly by, watching all of the planet's energy go into the ever-increasing demands of the machine race that threatens us.
Well, stupid me. How could I fail to consider so simple and credible a scenario.
Posts: 43162
Threads: 720
Joined: September 21, 2008
Reputation:
133
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:18 pm
(November 4, 2016 at 6:14 pm)Mathilda Wrote: It is critically important. These are limitations upon the AI. Computers are rigid and fragile. natural intelligence and bodies are flexible and robust. They also heal.
But you're talking about modern computers. We're talking about a future where we'd have more power and more powerful computers.
Quote:Any intelligence, whether artificial or natural, can only be evaluated in a context or particular environment. We see dogs as stupid compared to humans in a human environment, yet they can smell a far greater range than we can. Enough to sniff out drugs in luggage or to sense the onset of a seizure. A dog or cat would survive an apocalypse far better than say a chess grandmaster.
Yes but sense of smell and ability to survive isn't what intelligence is.
Posts: 13122
Threads: 130
Joined: October 18, 2014
Reputation:
55
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:19 pm
(This post was last modified: November 4, 2016 at 6:21 pm by abaris.)
(November 4, 2016 at 6:15 pm)Alasdair Ham Wrote: @ Abaris
I'm not saying we intrinsically have to assume his premises. I'm saying we have to assume his premises in order for his conclusion to follow from his premises.
Sure, but why should we do that without reflecting on the scenario?
(November 4, 2016 at 6:15 pm)Alasdair Ham Wrote: Why would there be a limit to it any more than there's a limit to other ways computers are improving?
Because we put that shit together. And an Apple G4 from the year 2000 isn't able to do the same things my current rig can do. My old G4, which I gave to charity, didn't evolve by itself, sadly.
Posts: 43162
Threads: 720
Joined: September 21, 2008
Reputation:
133
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:20 pm
(November 4, 2016 at 6:17 pm)abaris Wrote: (November 4, 2016 at 6:12 pm)Rhythm Wrote: The machine that designs machines.
So we're moving from AI into Terminator land now. Machines setting up their own factories to create yet more advanced machines, developing their own technology at their leisure while we stand idly by, watching all of the planet's energy go into the ever-increasing demands of the machine race that threatens us.
Well, stupid me. How could I fail to consider so simple and credible a scenario.
How are machines that design machines "Terminator land"?
Terminator land would be robots that try to take over the world and exterminate humanity. That's a far cry from self-correcting machines that help correct other machines.
It's another strawman.
Posts: 13122
Threads: 130
Joined: October 18, 2014
Reputation:
55
RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:25 pm
(November 4, 2016 at 6:20 pm)Alasdair Ham Wrote: How are machines that design machines "Terminator land"?
Terminator land would be robots that try to take over the world and exterminate humanity. That's a far cry from self-correcting machines that help correct other machines.
It's another strawman.
So, it's a strawman. I don't know what I should call not being able to answer the simplest questions about processing power, technology and energy consumption, then. Or the things we have to assume for Harris' scenario to work.
So tell me, what threat can machines creating machines (energy consumption, wink, wink) possibly pose if they aren't out to subdue or terminate us?