
Elon and AI
#41
RE: Elon and AI
(May 30, 2019 at 2:34 pm)Drich Wrote:
(May 30, 2019 at 2:22 pm)Mathilda Wrote: Neither of whom have actively researched AI.

Experts have explained to Stephen Hawking why he was wrong.

AGAIN SAYS YOU AND YOU ALONE!!! Are we just to take your word? You have done nothing to provide any evidence or proof of anything you have said!

The superhero of artificial intelligence: can this genius keep it in check?

Quote:Stephen Hawking is cited as an encouraging example of what such “getting up-to-speed” can mean. The two recently met in Cambridge for a private conversation instigated by Hassabis. “It was obviously a fantastic honour just meeting him,” he enthuses, pulling out his iPhone – he remains a devotee, despite his new paymasters – to show me a selfie. “We only had an hour scheduled, but he had so many questions we ended up talking for four hours. He missed lunch, so his minders were not very happy with me.”

Since their meeting, Hassabis points out, Hawking has not mentioned “anything inflammatory about AI” in the press; most surprisingly, in his BBC Reith lectures last month, he did not include artificial intelligence in his list of putative threats to humanity. “Maybe it helped, hearing more about the practicalities; more about the actual systems we might build and the checks and controls we can have on those,” Hassabis ventures.


Oh, and just to add something about Demis Hassabis, the expert who corrected Stephen Hawking ...

Lunch with the FT: Demis Hassabis

Quote:I ask him why he chose to sell his company to Google. DeepMind had plenty of money in the bank, including funding from Peter Thiel, the first major backer of Facebook, and Elon Musk, who leads the commercial space flight group SpaceX.
#42
RE: Elon and AI
(May 30, 2019 at 3:34 pm)Mathilda Wrote:
(May 30, 2019 at 2:34 pm)Drich Wrote: AGAIN SAYS YOU AND YOU ALONE!!! Are we just to take your word? you have done nothing to provide any evidence or proof to anything you have said!

The superhero of artificial intelligence: can this genius keep it in check?

Quote:Stephen Hawking is cited as an encouraging example of what such “getting up-to-speed” can mean. The two recently met in Cambridge for a private conversation instigated by Hassabis. “It was obviously a fantastic honour just meeting him,” he enthuses, pulling out his iPhone – he remains a devotee, despite his new paymasters – to show me a selfie. “We only had an hour scheduled, but he had so many questions we ended up talking for four hours. He missed lunch, so his minders were not very happy with me.”

Since their meeting, Hassabis points out, Hawking has not mentioned “anything inflammatory about AI” in the press; most surprisingly, in his BBC Reith lectures last month, he did not include artificial intelligence in his list of putative threats to humanity. “Maybe it helped, hearing more about the practicalities; more about the actual systems we might build and the checks and controls we can have on those,” Hassabis ventures.


Oh, and just to add something about Demis Hassabis, the expert who corrected Stephen Hawking ...

Lunch with the FT: Demis Hassabis

Quote:I ask him why he chose to sell his company to Google. DeepMind had plenty of money in the bank, including funding from Peter Thiel, the first major backer of Facebook, and Elon Musk, who leads the commercial space flight group SpaceX.
Just proof that smart people can become overconfident in their own intelligence.

Musk has also been criticized by AI experts:

https://www.valuewalk.com/2018/05/elon-m...elligence/

https://www.roboticsbusinessreview.com/s..._about_ai/

https://techcrunch.com/2017/07/19/this-f...stands-ai/

Not to mention that Mark Zuckerberg and Eric Schmidt also say he's wrong.
Seek strength, not to be greater than my brother, but to fight my greatest enemy -- myself.

Inuit Proverb

#43
RE: Elon and AI
It's not that I think so highly of myself that I can easily dismiss Elon Musk, as Aegon suggested. I would happily listen to anyone with any kind of experience, even an MSc student trying to do something novel, or even philosophers who work as part of a team developing AI, because they see first hand what works and what doesn't, and are normally the most widely read in the literature (very useful to have around if, like me, you're someone more interested in developing than reading). I have had very interesting conversations with people who have no formal qualifications in the field. What matters to me is whether they have real experience of trying to get it to work.

But I have absolutely no time any more for the Elon Musks of this world, or for the media scientists such as Ray Kurzweil and Kevin Warwick, who are more interested in promoting themselves as personalities.

The field of AI is still really young despite going for 60 years. The pattern is to hype it up, claim that human-level intelligence is just around the corner, and then watch an AI winter set in once the problem turns out to be much harder than it looks. Think about it: people were making those claims while in awe of the kind of processing power you now find in your washing machine.

Normally any success that AI has branches off and becomes a field in its own right. This has led some people to argue that AI will never happen because it will always be blue-sky research looking for the next success story. Machine learning is the first real commercial success, though, and what is different this time is that the narrow AI we see now hasn't been renamed (yet), although it should and probably will be called something other than AI once it becomes an engineering discipline that can be replicated. And machine learning only became commercially viable to the extent that it has because of the growth of computing and of the Internet collecting huge amounts of data, with graphics cards giving us the processing power we need. But this kind of AI is very different from what is required for strong AI, or Artificial General Intelligence, even though it is a useful tool that the latter will require.

The field of AI has always been one of very few opportunities. There have been loads of really impressive ideas out there which have gone nowhere, not because they weren't promising or hadn't demonstrated their worth, but through lack of funding and opportunity. And once you move away from machine learning, that still applies. There are many subdisciplines of AI that don't get the attention they deserve.
#44
RE: Elon and AI
(May 30, 2019 at 11:49 am)Mathilda Wrote:
(May 30, 2019 at 11:20 am)Aegon Wrote: You could argue that he's not as educated on the topic as some, including yourself and others who are personally working on AI itself. But I'm laughing at your wording. He knows "fuck all"? He's one of the brightest minds on the planet, CEO of a company that revolves around AI, has the most advanced consumer-accessible AI vehicles on the market, and is constantly leaping towards advancing artificial intelligence in our lives. I don't know what Drich is referring to, but it's most likely another legitimate addition to this defense of his knowledge. You think he'd be able to reach this point if he knew fuck all about one of the main aspects of the business that keeps him famous?

I mean come on... you really think so highly of yourself that you feel comfortable putting down Elon Musk in such a flippant manner? I'm not just going to believe you when you say it like that. It feels like I'm on /r/iamverysmart.

No. It's a sign of someone utterly fucking fed up to the back teeth with people like Elon Musk, Sam Harris, Nick Bostrom, and even Stephen Hawking, who pretend that they have some knowledge of AI and talk about it with no real understanding of the fundamental principles behind it. They don't understand the basic history of AI research. These are futurists, philosophers and entrepreneurs (always men) who have never actually worked on strong AI (which is what Elon Musk is referring to when he talks about it in the future). These are people who are good at selling themselves, while the people who do the actual work get ignored.

I can understand that. Plenty of very smart people overstep their boundaries in terms of expertise. However I feel as though Musk is a bit more connected to the whole sphere of AI than anyone else you listed, considering the businesses he operates.

Quote:The very nature of strong AI is that your theory gets blown out the water the moment you try putting it into practice. Any one who works at the coal face finds this out very quickly if they are doing anything novel.

AI is not like a natural science where you have something that already exists that you can study.

AI is not like conventional engineering. When you engineer something you know what constraints you have to fulfill. You know the requirements. The essential problem of AI comes down to, how do you solve a whole range of problems when you don't actually know what those problems are going to be? Because if you knew this, you wouldn't need an intelligent solution.

What do you mean by this? What sort of problems are we looking to solve that aren't actually problems yet? Is there an example? It seems to me there are many problems we know exist, or can be sure will exist in the next 5 years or so, that AI can assist with. The public health sphere comes to mind.

Also, what's your take on Andrew Yang and his fear that AI will kill so many jobs we will need a guaranteed basic income provided by the government?

Quote:The challenge of AI comes down to making it scale. Not recognising this has tripped up thousands of projects, ruined careers and closed start ups. People like Elon Musk and the futurists I described above extrapolate the curve and assume progress is constant. It never is. The current paradigm is already hitting the limits of how it can scale. These people mistakenly make assumptions that anyone who does practical work in the area knows from painful experience are utterly flawed.

The reason why people can get away with making such bold and flawed predictions is that we have made so little progress in strong AI. It's not as if we can say, ah, this is how to do it. Generally all we have are real researchers who know what doesn't work, and no one listens to them because that's not exciting.

Are you really that pessimistic about the advances AI could make in the next 10, 20, 50, etc. years? Look at how technology has transformed our lives in the last 10 years alone; we are reliant on it entirely. It's a staple of our society now. In developed, affluent countries, it's how we communicate, how we entertain ourselves, how we learn information... it's taken over every aspect of our lives, and it's because we figured out how to put a whole-ass computer in our pockets. Homo sapiens are officially radically changed because of it. You don't see AI making similar strides? Does more intensely advanced technology not demand AI progression?

Quote:Elon Musk one of the brightest minds on the planet? You think he personally designed the SpaceX rockets, Tesla cars and AI? No. He hired people to do that for him. His talent is in spotting opportunity and hiring talent. He is not Tony Stark.

He doesn't have to personally design the rockets or cars to be one of the brightest minds on the planet. But he's working 18-hour days and sleeping in his factory office. What's he doing? Tweeting? Thinking of business strategy? He has significant involvement in the actual engineering and science of it. You really don't think he has a greater understanding of AI, as CEO of a company whose product revolves around utilization of AI, than the average joe? I feel like you're exaggerating his lack of knowledge.

I don't have a problem with the idea that he might be "leading on" the public with his unrealistic expectations of AI technology, or that he might be wrong about some things. My issue is your bravado, as if the guy is completely clueless. He is plainly, obviously not.
#45
RE: Elon and AI
(May 31, 2019 at 11:41 am)Aegon Wrote: I can understand that. Plenty of very smart people overstep their boundaries in terms of expertise. However I feel as though Musk is a bit more connected to the whole sphere of AI than anyone else you listed, considering the businesses he operates.

Which businesses? Tesla? SpaceX? The Boring Company? OpenAI, which he co-founded in 2016 and has since quit?

He has had some investment and managerial experience setting up an AI company. That does not make him an AI expert.

In the same way this doesn't make David Cameron an expert in AI ...

David Cameron takes job with US artificial intelligence firm


(May 31, 2019 at 11:41 am)Aegon Wrote:
(May 30, 2019 at 11:49 am)Mathilda Wrote: AI is not like conventional engineering. When you engineer something you know what constraints you have to fulfill. You know the requirements. The essential problem of AI comes down to, how do you solve a whole range of problems when you don't actually know what those problems are going to be? Because if you knew this, you wouldn't need an intelligent solution.

What do you mean by this? What sort of problems are we looking to solve that aren't actually problems yet? Is there an example? It seems to me there are many problems we know exist, or can be sure will exist in the next 5 years or so, that AI can assist with. The public health sphere comes to mind.


I was trying to avoid technical jargon about an agent's environment and so on. Computers are very fast idiots: they need everything made explicit because it all boils down to binary calculations. The whole purpose of AI is to make computers more autonomous.

Imagine you run a business and you have one particularly stupid employee. He requires constant micromanaging. Every task you give him, you have to give explicit step-by-step instructions. But once you do that he can do it really well without error. That's conventional computing.

But what you really want is to have an intelligent employee who you can give high level commands to and they are empowered to figure out how to get the job done and then do it. That's the aim of AI.

What this means in practice though is that your employee is going to come across problems that you don't know about and solve them for himself. That's the whole point of being autonomous.
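The stupid-employee analogy can be sketched in code. This is only a toy illustration (the grid "rooms" and both routines are invented for the example, not taken from any real system): the first function is conventional computing, where the author spells out the route in advance; the second is given only a goal and discovers the layout for itself.

```python
# Conventional computing: every step is spelled out in advance.
# If the environment differs from what the author imagined, the plan fails.
def explicit_cleaner(room):
    """Visit a fixed list of cells in a fixed, hard-coded order."""
    plan = [(0, 0), (0, 1), (1, 0), (1, 1)]  # the author's assumed layout
    cleaned = []
    for cell in plan:
        if cell in room:  # cells outside the plan are simply never visited
            cleaned.append(cell)
    return cleaned

# The more autonomous version: no fixed route, just a goal ("clean every
# reachable cell") plus a strategy for exploring whatever it finds.
def autonomous_cleaner(room, start=(0, 0)):
    """Flood-fill whatever room shape the robot happens to be placed in."""
    frontier, cleaned = [start], set()
    while frontier:
        cell = frontier.pop()
        if cell in room and cell not in cleaned:
            cleaned.add(cell)
            x, y = cell
            frontier += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return cleaned

square = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the room the author imagined
l_shape = {(0, 0), (1, 0), (2, 0), (2, 1)}  # a room the author never saw

print(len(autonomous_cleaner(square)))   # 4
print(len(autonomous_cleaner(l_shape)))  # 4 -- adapts to the new layout
print(len(explicit_cleaner(l_shape)))    # 2 -- the hard-coded plan misses cells
```

The explicit version only works in the room its author anticipated; the autonomous one copes with a layout nobody specified in advance, which is the "empowered employee" in miniature.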

The challenge of AI, though, is that we have to use explicit instructions that can cope with unknown environments, tasks or problems. For example, a vacuum-cleaning robot or a search & rescue drone can't require constant micromanaging if it's going to be useful, as otherwise you might as well have humans doing the task. But it does need to adapt to changing circumstances, and how do you as an engineer design it to adapt in the way you want if you don't know what those environments, circumstances or tasks are going to be in advance?

This is why the AI we have is called narrow AI: because it is designed for one specific task, it cannot generalise to entirely new problem domains. For example, we may be impressed by AlphaGo being able to beat a human at Go (and it is impressive), but that same program couldn't then play chess against the same player. Or if you changed the rules of Go on the fly, the AI would fail dismally.

Generalisation is a major problem that machine learning really struggles with. This is the state of AI now.
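The same point can be shown with a toy example (everything here, the one-dimensional "game" and all its numbers, is invented for illustration; real game-playing systems are vastly more complex). A tabular Q-learning agent is trained to walk to a goal at +3; when the "rules" change and the goal moves to -3, its learned table transfers nothing, and only retraining helps:

```python
import random

def train(goal, episodes=2000, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a 1-D walk: from state 0, each action moves
    +1 or -1, and entering `goal` earns a reward of 1."""
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # short episodes of random exploration
            a = random.choice((-1, 1))
            s2 = s + a
            r = 1.0 if s2 == goal else 0.0
            best_next = max(q.get((s2, b), 0.0) for b in (-1, 1))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    return q

def reaches_goal(q, goal, steps=6):
    """Follow the greedy policy from state 0; True if it ever enters `goal`."""
    s = 0
    for _ in range(steps):
        s += max((-1, 1), key=lambda b: q.get((s, b), 0.0))
        if s == goal:
            return True
    return False

random.seed(0)  # for reproducibility
q = train(goal=3)
print(reaches_goal(q, 3))                # True  -- mastered the trained game
print(reaches_goal(q, -3))               # False -- rules changed, table useless
print(reaches_goal(train(goal=-3), -3))  # True  -- only retraining fixes it
```

The Q-table has memorised values for one reward structure; when the goal moves, nothing in it carries over, which is the sense in which narrow AI fails the moment the rules change on the fly.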



(May 31, 2019 at 11:41 am)Aegon Wrote: Also, what's your take on Andrew Yang and his fear that AI will kill so many jobs we will need a guaranteed basic income provided by the government?

This is exactly the kind of thing we should be talking about with regard to AI. There are many, many problems like this that actual AI researchers are very aware of, yet people like Elon Musk ignore them while talking about killer robots and existential threats. Being cynical about it (and I have good reason to be), there is plenty of reason to suggest that this is a deliberate ploy. If people are worried about sci-fi-inspired tropes that won't ever happen, they're less likely to worry about how their jobs are being replaced by automation (which doesn't necessarily require AI), or that these systems encode society's prejudices and reinforce them, or that decisions affecting our lives can't be explained because the computer said so, or how we determine who is responsible when AI goes wrong. This means corporations increase their profits with little scrutiny until it's too late.

These were the points I made when writing to MSPs in the Scottish Parliament because of a motion raised about existential threats (Elon Musk's name was specifically mentioned).



(May 31, 2019 at 11:41 am)Aegon Wrote: Are you really that pessimistic about the advances AI could make in the next 10, 20, 50, etc years?

Yes.


(May 31, 2019 at 11:41 am)Aegon Wrote: Look at how technology has transformed our lives in the last 10 years alone; we are reliant on it entirely. It's a staple of our society now. In developed, affluent countries, it's how we communicate, how we entertain ourselves, how we learn information... it's taken over every aspect of our lives, and it's because we figured out how to put a whole-ass computer in our pockets. Homo sapiens are officially radically changed because of it. You don't see AI making similar strides? Does more intensely advanced technology not demand AI progression?

None of which helps the development of strong AI. This is narrow AI created for a specific purpose that cannot generalise over different problem domains. It does not scale up.


(May 31, 2019 at 11:41 am)Aegon Wrote:
(May 30, 2019 at 11:49 am)Mathilda Wrote: Elon Musk one of the brightest minds on the planet? You think he personally designed the SpaceX rockets, Tesla cars and AI? No. He hired people to do that for him. His talent is in spotting opportunity and hiring talent. He is not Tony Stark.

He doesn't have to personally design the rockets or cars to be one of the brightest minds on the planet. But he's working 18-hour days and sleeping in his factory office. What's he doing? Tweeting? Thinking of business strategy? He has significant involvement in the actual engineering and science of it. You really don't think he has a greater understanding of AI, as CEO of a company whose product revolves around utilization of AI, than the average joe? I feel like you're exaggerating his lack of knowledge.

I don't have a problem with the idea that he might be "leading on" the public with his unrealistic expectations of AI technology, or that he might be wrong about some things. My issue is your bravado, as if the guy is completely clueless. He is plainly, obviously not.

Exactly. He's working 18-hour days on his own companies, none of which are in AI (any more). Tesla itself is failing. Take Karl Friston: he has a similar workload, but he devotes himself to the brain sciences, and he clearly has a lot of expertise relevant to the field of AI. Yet I wouldn't automatically say that he had the expertise to found and run a company.

Sure, I am exaggerating by saying Elon Musk knows fuck all about AI. He clearly knows more than the average person in the street, though not as much as an MSc student. If I spoke to a PhD student I could expect to learn something new and interesting. If I wanted to learn about the AI that a company he invested in is working on, I'd go to the lead scientist or engineer. I wouldn't ask Elon Musk.

Using your logic, politicians must be some of the brightest people on the planet because they have to know the basics of so many different aspects of society. In practice, though, they get given a single A4 summary and the rest gets delegated to the appropriate experts. The same happens in business. Sure, they have to be intelligent to pick up and understand such topics quickly, but that's a far cry from having any real experience in them.




