HOW is irrelevant to IF it's going to happen, because IF you accept his 3 premises, you already accept THAT it is going to happen. All this speculation about how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.
Can we build AI without losing control over it? | Sam Harris
(November 4, 2016 at 6:31 pm)Mathilda Wrote:
(November 4, 2016 at 5:59 pm)Alasdair Ham Wrote: I think emotions get in the way sometimes. First you said his assumptions weren't sound. Then you said you didn't have a problem with the assumptions.
I see what you mean. So you're saying he laid out 3 explicit premises, but then had other premises he was making implicitly that he didn't spell out?

RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:37 pm
(This post was last modified: November 4, 2016 at 6:41 pm by The Grand Nudger.)
Meh, it's simpler than all that: it's imagining that tomorrow's problem, because it won't come until tomorrow, is not a problem. Harris addresses this specifically as well, with the bit about receiving a translation from aliens that reads "We are coming to your planet in 50 years, get ready," and asking whether we just sit there counting the months until the mothership arrives, or whether we do some work. Does this change if the message reads 500 years, or 5000? He proposes that we'd work with a sense of urgency and sober assessment that he doesn't see from AI researchers, and that the things they say to reassure us (parroted almost word for word even in this thread, as though the man could read minds and foretell the future) are less than reassuring.
Again, he didn't rely on that premise. He used it, but also explained why it wasn't critical to his argument. Now, to be blunt, I don't afford you any more credibility than you would deny him. You don't have a crystal ball, you don't know what it takes to make general AI (or even a human intellect; no one does), and you don't know whether it will take even -50- years. You don't, and pretending that you do is, again, a failure of reason already addressed by Harris, who calls it out for the non sequitur it is, not a failure of expertise in the field. This, you've consistently ignored.
I am the Infantry. I am my country’s strength in war, her deterrent in peace. I am the heart of the fight… wherever, whenever. I carry America’s faith and honor against her enemies. I am the Queen of Battle. I am what my country expects me to be, the best trained Soldier in the world. In the race for victory, I am swift, determined, and courageous, armed with a fierce will to win. Never will I fail my country’s trust. Always I fight on…through the foe, to the objective, to triumph overall. If necessary, I will fight to my death. By my steadfast courage, I have won more than 200 years of freedom. I yield not to weakness, to hunger, to cowardice, to fatigue, to superior odds, For I am mentally tough, physically strong, and morally straight. I forsake not, my country, my mission, my comrades, my sacred duty. I am relentless. I am always there, now and forever. I AM THE INFANTRY! FOLLOW ME!
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.
No. We cannot make that assumption, for the myriad reasons I have already laid out. It might happen. It very well might not. Personally, my best guess, as someone who develops strong AI, is that we'll end up with the robot equivalent of animals for very specific environments. We've been using animals for a very long time now.
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: AI is getting better and better and one day will be so much smarter than us that we will be insignificant to it.
OK, I'm going to post here what I wrote on TTA. I was trying to avoid this, but I really don't want to have to write it out again.
Quote: I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability, and many companies and research projects have failed because people do not appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case, and then think that because it works for that, they can do something useful with it. But then they find that they can't scale up their AI.

RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:39 pm
(This post was last modified: November 4, 2016 at 6:39 pm by Edwardo Piet.)
So like... if he's saying explicitly that it doesn't have to be exponential, even if he then implicitly gives exponentiality as an example... he's already said that it's not required, and it isn't: if an AI keeps improving, it doesn't have to be exponential, as he said.
I mean... why focus on him implicitly giving exponentiality as an example if he already said explicitly that it is NOT a condition?

(November 4, 2016 at 6:33 pm)Alasdair Ham Wrote: HOW is irrelevant to IF it's going to happen, because IF you accept his 3 premises, you already accept THAT it is going to happen. All this speculation about how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.
No, it is relevant if one is talking about technology. A yet-to-be-developed AI is still a piece of technology. However advanced, it needs hardware, software and energy to function. Three components we provide it with, based on our technological knowledge. The premise is entirely void if it doesn't address these basics and how they could possibly be overcome by a piece of intelligent design. A piece of intelligent design using our technology, with all its limitations. And the year this technology is invented is entirely irrelevant to the question, since, going by our own standards, it will be old hat in another year or two.

(November 4, 2016 at 6:38 pm)Mathilda Wrote:
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.
But that's not what I'm saying, Mathilda. I'm not saying that we have to make that assumption full stop. I'm saying that we have to make that assumption for his conclusion to follow.

(November 4, 2016 at 6:41 pm)abaris Wrote:
(November 4, 2016 at 6:33 pm)Alasdair Ham Wrote: HOW is irrelevant to IF it's going to happen, because IF you accept his 3 premises, you already accept THAT it is going to happen. All this speculation about how it's going to happen in practice is the same mistake people make when they say that there can't be objective answers to morality in principle just because we may not be able to find any in practice.
How it's going to happen is irrelevant to if it's going to happen, if you've already accepted that it's going to happen, if that it is going to happen is his only point, and if how it's going to happen is not his point.

(November 4, 2016 at 6:45 pm)Alasdair Ham Wrote: How it's going to happen is irrelevant to if it's going to happen, if you've already accepted that it's going to happen, if that it is going to happen is his only point, and if how it's going to happen is not his point.
So I ask again: is your point "love it or leave it"? Otherwise, I'm not accepting that it's going to happen, for the reasons I tried to give. Very basic reasons he doesn't address in even one word.

RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 6:49 pm
(This post was last modified: November 4, 2016 at 6:51 pm by The Grand Nudger.)
(November 4, 2016 at 6:38 pm)Mathilda Wrote: No. We cannot make that assumption for the myriad of reasons I have already laid out.
Which he -also- addressed, referencing the incredible long shot that we'd hit upon the perfect AI design the first go-round, with no safety concerns inherent -to- the design. It becomes a human concern. He aptly called it an oracle, and described how possession of such an oracle under our current political and economic system would or could be disastrous for reasons not at all related to the design itself. Yes, we have been using animals for some time; do you need a list of the deleterious ways we've used them?
Quote: I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability, and many companies and research projects have failed because people do not appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case, and then think that because it works for that, they can do something useful with it. But then they find that they can't scale up their AI.
Here's that bizarre objection again. Granted, some things may not be computable on grounds of time... but is intelligence one of them? Because, as I've already mentioned, it seems like an unguided and unoptimized series of trial and error happened upon intelligence in a lot less time than all of the time in the universe. You have an example between your ears.
Quote: There is a reason why the human brain has so many neurons and such a high connectivity between each neuron.
That reason might actually be hereditary inefficiency.
Quote: So the problem is that people expect progress to follow an exponential curve. It's taken this long to be able to achieve self-driving cars; what will happen in another five years?
But what they aren't taking into account is that we reached this point because of the exponential doubling of processing power from Moore's law over many decades. Our understanding of intelligence hasn't progressed as fast; in fact, nothing else has. And Moore's law is coming to an end.
Again, preempted. He specifically stated both that and why he doesn't require Moore's law to continue, or exponential growth. Progress, just progress, at any rate, eventually gets us there.
Quote: Another aspect to the issue of scalability is that each small progression in AI has large effects because it affects many people. So, for example, there used to be whole call centres full of people that can now be replaced by an automated system. The automated system is extremely constrained, but that doesn't matter much to the thousands of people looking for new jobs. Sure, talk about this; it's an issue of socio-economics rather than AI. What Sam Harris is talking about is science fantasy.
Strange, because he -did- reference the socioeconomic impact of hypothetical AI... you just keep parroting his points, and mouthing objections he goes to the trouble of working out, all the while claiming he doesn't address this or that, or that he makes assumptions he explicitly disavows with reasoning to support... clearly and ardently believing both that you disagree with the man and that you're correcting his mistakes.