Can we build AI without losing control over it? | Sam Harris
RE: Can we build AI without losing control over it? | Sam Harris
HOW is irrelevant to IF it's going to happen because IF you accept his 3 premises you already accept THAT it is going to happen. All this speculation of how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:31 pm)Mathilda Wrote:
(November 4, 2016 at 5:59 pm)Alasdair Ham Wrote: I think emotions get in the way sometimes. First you said his assumptions weren't sound. Then you said you didn't have a problem with the assumptions.

I said I didn't have a problem with his first assumption he stated. The second and third I did have problems with. But that wasn't even what I was referring to when I said that he was making assumptions. His whole talk was laden with unstated assumptions that he didn't even realise that he was making.

(November 4, 2016 at 5:59 pm)Alasdair Ham Wrote: Then you said he was assuming exponentiality. Then I pointed out that he said the exact opposite.

But he was still relying on exponentiality despite claiming not to. He made reference to the singularity. He didn't call it that specifically but that's what he was referring to when he talked about replacing a room full of people with a computer to do research.

I see what you mean.

So you're saying he laid out 3 explicit premises, but he also relied on other premises implicitly that he never made explicit?
RE: Can we build AI without losing control over it? | Sam Harris
Meh, it's simpler than all that: it's imagining that tomorrow's problem, because it won't come until tomorrow, is not a problem.  Harris addresses this specifically as well, with the bit about receiving a translation from aliens that reads: "We are coming to your planet in 50 years, get ready."  And asking whether we just sit there counting the months until the mothership arrives, or do we do some work?  Does this change if the message reads 500 years, 5000?  He proposes that we'd work with a sense of urgency and sober assessment that he doesn't see from AI researchers, and that the things they say to reassure us...parroted almost word for word even in this thread, as though the man could read minds and foretell the future...are less than reassuring.

Again, he didn't rely on that premise. He used it, but also explained why it wasn't critical to his argument. Now, to be blunt, I don't afford you any more credibility than you would deny him. You don't have a crystal ball, you don't know what it takes to make general AI (or even a human intellect...no one does), you don't know if it will take even -50- years. You don't, and pretending that you do is, again, a failure of reason already addressed by Harris, who calls it out for the non sequitur it is...not a failure of expertise in the field.

This, you've consistently ignored.
I am the Infantry. I am my country’s strength in war, her deterrent in peace. I am the heart of the fight… wherever, whenever. I carry America’s faith and honor against her enemies. I am the Queen of Battle. I am what my country expects me to be, the best trained Soldier in the world. In the race for victory, I am swift, determined, and courageous, armed with a fierce will to win. Never will I fail my country’s trust. Always I fight on…through the foe, to the objective, to triumph overall. If necessary, I will fight to my death. By my steadfast courage, I have won more than 200 years of freedom. I yield not to weakness, to hunger, to cowardice, to fatigue, to superior odds, For I am mentally tough, physically strong, and morally straight. I forsake not, my country, my mission, my comrades, my sacred duty. I am relentless. I am always there, now and forever. I AM THE INFANTRY! FOLLOW ME!
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.

We just need to assume that it's inevitable eventually.

No. We cannot make that assumption for the myriad of reasons I have already laid out.

It might happen. It very well might not.

Personally, my best guess, as someone who develops strong AI, is that we'll end up with the robot equivalent of animals for very specific environments. We've been using animals for a very long time now.


(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: AI is getting better and better and one day will be so much smarter than us that we will be insignificant to it.


OK I'm going to post here what I wrote on TTA. I was trying to avoid this but I really don't want to have to write it out again.


Quote:I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability and many companies and research projects have failed because people do not appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case and then think that because it works for that then they can do something useful with it. But then they find that they can't scale up their AI.

It's called the curse of dimensionality and it doesn't just affect AI but any computer program where you need to take into account multiple real world variables. Say for example you have 10 different sensors on a fighter aircraft and want to predict when a component will fail. You could plot all this on a graph and try and analyse the hyperdimensional space. Add one more sensor though and the space that you need to work with grows exponentially. The travelling salesman problem is a classic example of this. Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? Add too many cities and you very soon run out of enough computing time available in the history of the universe.

There is a reason why the human brain has so many neurons and such a high connectivity between each neuron.

So the problem is that people expect progress to follow an exponential curve. It's taken this long to be able to achieve self-driving cars, so what will happen in another five years? But what they aren't taking into account is that we reached this point because of the exponential doubling of processing power from Moore's law over many decades. Our understanding of intelligence hasn't progressed anywhere near as fast; in fact nothing else has. And Moore's law is coming to an end.

Another aspect of the issue of scalability is that each small progression in AI has large effects, because it touches many people. For example, there used to be whole call centres full of people that can now be replaced by an automated system. The automated system is extremely constrained, but that doesn't matter much to the thousands of people looking for new jobs. Sure, talk about this; it's an issue of socio-economics rather than AI. What Sam Harris is talking about is science fantasy.
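The travelling-salesman example in the quote above can be made concrete. This is a purely illustrative sketch (not anything from the talk or the posts): a brute-force solver whose search space grows factorially with the number of cities, which is exactly why adding cities quickly exhausts any amount of computing time.

```python
from itertools import permutations

def brute_force_tsp(dists):
    """Exhaustively try every tour over n cities.

    With city 0 fixed as the start, there are (n-1)! candidate
    tours; for 20 cities that is 19! (about 1.2e17) tours, which
    is already far beyond practical brute force.
    """
    n = len(dists)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dists[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Tiny symmetric distance matrix for 4 cities (made-up numbers).
dists = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_tsp(dists))  # shortest tour and its length
```

Four cities means only 3! = 6 tours, so this runs instantly; the point of the quote is that the same code at 30 or 40 cities would need more time than the universe has.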
RE: Can we build AI without losing control over it? | Sam Harris
So like... if he's saying explicitly that it doesn't have to be exponential, even if he then implicitly gives exponentiality as an example... he's already said that it's not required, and it isn't: if an AI keeps improving, it doesn't have to be exponential, as he said it doesn't.

I mean... why focus on him implicitly giving exponentiality as an example if he already said explicitly that it is NOT a condition?
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:33 pm)Alasdair Ham Wrote: HOW is irrelevant to IF it's going to happen because IF you accept his 3 premises you already accept THAT it is going to happen. All this speculation of how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.

No, it is relevant if one is talking about technology. A yet-to-be-developed AI is still a piece of technology. However advanced, it needs hardware, software and energy to function. Three components we provide it with, based on our technological knowledge.

The premise is entirely void if it doesn't address the basics and how these basics could possibly be overcome by a piece of intelligent design. A piece of intelligent design using our technology, with all its limitations. And the year this technology is invented is entirely irrelevant to the question, since, going by our own standards, it will be old hat in another year or two.
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:38 pm)Mathilda Wrote:
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.

We just need to assume that it's inevitable eventually.

No. We cannot make that assumption for the myriad of reasons I have already laid out.

It might happen. It very well might not.

But that's not what I'm saying Mathilda. I'm not saying that we have to make that assumption full stop. I'm saying that we have to make that assumption for his conclusion to follow.
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:41 pm)abaris Wrote:
(November 4, 2016 at 6:33 pm)Alasdair Ham Wrote: HOW is irrelevant to IF it's going to happen because IF you accept his 3 premises you already accept THAT it is going to happen. All this speculation of how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.

No, it is relevant if one is talking about technology.[...]

How it's going to happen is irrelevant to if it's going to happen if you've already accepted that it's going to happen and if that it is going to happen is his only point and if how it's going to happen is not his point.
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:45 pm)Alasdair Ham Wrote: How it's going to happen is irrelevant to if it's going to happen if you've already accepted that it's going to happen and if that it is going to happen is his only point and if how it's going to happen is not his point.

So I ask again - you make the point of love it or leave it?

Otherwise I'm not accepting that it's going to happen, for reasons I tried to give. Very basic reasons he doesn't even address in one word.
RE: Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:38 pm)Mathilda Wrote: No. We cannot make that assumption for the myriad of reasons I have already laid out.

It might happen. It very well might not.

Personally my best guess is as someone who develops strong AI is that we'll end up with the robot equivalent of animals for very specific environments. We've been using animals for a very long time now.
Which he -also- addressed, referencing the incredible long shot that we hit upon the perfect AI design the first go round, with no safety concerns inherent -to- the design.  It becomes a human concern.  He aptly called it an oracle, and described how possession of such an oracle under our current political and economic system would or could be disastrous for reasons not at all related to the design itself.  Yes, we have been using animals for some time; do you need a list of the deleterious ways we've used them...?



Quote:I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability and many companies and research projects have failed because people do not appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case and then think that because it works for that then they can do something useful with it. But then they find that they can't scale up their AI.

It's called the curse of dimensionality and it doesn't just affect AI but any computer program where you need to take into account multiple real world variables. Say for example you have 10 different sensors on a fighter aircraft and want to predict when a component will fail. You could plot all this on a graph and try and analyse the hyperdimensional space. Add one more sensor though and the space that you need to work with grows exponentially. The travelling salesman problem is a classic example of this. Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? Add too many cities and you very soon run out of enough computing time available in the history of the universe.
Here's that bizarre objection again.  Granted, some things may not be computable on grounds of time...but is intelligence one of them?  Because, as I've already mentioned, it seems like an unguided and unoptimised series of trial and error happened upon intelligence in a lot less time than all of the time in the universe.  You have an example between your ears.

Quote:There is a reason why the human brain has so many neurons and such a high connectivity between each neuron.
That reason might actually be hereditary inefficiency.  

Quote:So the problem is that people expect progress to follow exponential curve. It's taken this long to be able to achieve self driving cars what will happen in another five years? But what they aren't taking into account is that we reached this point because of the exponential doubling of processing power from Moore's law over many decades. Our understanding of intelligence hasn't progressed as fast, in fact nothing else has. And Moore's law is coming to an end.
Again, preempted.  He specifically stated both that and why he doesn't require Moore's law to continue, or exponential growth.  Progress, just progress, at any rate, eventually gets us there.

Quote:Another aspect to the issue of scalability, is that each small progression in AI has large changes because it affects many people. So for example, there used to be whole call centres full of people that can now be replaced by an automated system. The automated system is extremely constrained but that doesn't matter much to the thousands of people looking for new jobs. Sure, talk about this, it's an issue of socio-economics rather than AI. What Sam Harris is talking about is science fantasy.

Strange, because he -did- reference the socioeconomic impact of hypothetical AI... You just keep parroting his points and mouthing objections he goes to the trouble of working out, all the while claiming he doesn't address this or that, or that he makes assumptions he explicitly disavows with reasoning to support... clearly and ardently believing both that you disagree with the man and that you're correcting his mistakes...


