RE: Elon and AI
May 31, 2019 at 1:40 pm
(This post was last modified: May 31, 2019 at 1:43 pm by I_am_not_mafia.)
(May 31, 2019 at 11:41 am)Aegon Wrote: I can understand that. Plenty of very smart people overstep their boundaries in terms of expertise. However I feel as though Musk is a bit more connected to the whole sphere of AI than anyone else you listed, considering the businesses he operates.
Which businesses? Tesla? SpaceX? The Boring Company? OpenAI, which he co-founded in 2015 and has since quit?
He has had some investment and managerial experience setting up an AI company. That does not make him an AI expert.
In the same way this doesn't make David Cameron an expert in AI ...
David Cameron takes job with US artificial intelligence firm
(May 31, 2019 at 11:41 am)Aegon Wrote:(May 30, 2019 at 11:49 am)Mathilda Wrote: AI is not like conventional engineering. When you engineer something you know what constraints you have to fulfill. You know the requirements. The essential problem of AI comes down to, how do you solve a whole range of problems when you don't actually know what those problems are going to be? Because if you knew this, you wouldn't need an intelligent solution.
What do you mean by this? What sort of problems are we looking to solve that aren't actually problems yet? Is there an example? It seems to me there are many problems we know exist, or can be sure will exist in the next 5 years or so, that AI can assist with. The public health sphere comes to mind.
I was trying to avoid technical jargon about an agent's environment and so on. Computers are very fast idiots that need everything made explicit, because it all boils down to binary calculations. The whole purpose of AI is to make computers more autonomous.
Imagine you run a business and you have one particularly stupid employee. He requires constant micromanaging. Every task you give him, you have to give explicit step-by-step instructions. But once you do that he can do it really well without error. That's conventional computing.
But what you really want is an intelligent employee who you can give high-level commands to, and who is empowered to figure out how to get the job done and then do it. That's the aim of AI.
What this means in practice, though, is that your employee is going to come across problems you don't know about and solve them for himself. That's the whole point of being autonomous.
The challenge of AI, though, is that we have to use explicit instructions that can cope with unknown environments, tasks or problems. For example, a vacuum cleaning robot or a search & rescue drone can't require constant micromanaging if it's going to be useful, as otherwise you might as well have humans doing the task. But it does need to adapt to changing circumstances, and how do you as an engineer design it to adapt in the way you want if you don't know what those environments, circumstances or tasks are going to be in advance?
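To make the "fast idiot" point concrete, here's a toy sketch of my own (hypothetical, not from any real robot's code): a conventional program can only handle the situations its programmer enumerated in advance.

```python
# A conventional "vacuum robot" controller: every situation must be
# spelled out explicitly by the programmer ahead of time.
def conventional_controller(situation):
    rules = {
        "dust on floor": "vacuum it",
        "battery low": "return to dock",
        "bin full": "empty bin",
    }
    if situation in rules:
        # Superb, tireless, error-free -- on the cases it was given.
        return rules[situation]
    # Anything unforeseen cannot be handled at all.
    return "error: no instruction for this situation"

print(conventional_controller("dust on floor"))     # vacuum it
print(conventional_controller("cat asleep on rug")) # error: no instruction for this situation
```

The goal of AI is to replace that hand-written rule table with something that works out a sensible action for situations the designer never anticipated.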
This is why the AI we have is called narrow AI. Because it is designed for one specific task, it cannot generalise to entirely new problem domains. For example, we may be impressed by AlphaGo being able to beat a human at Go (and it is impressive). But that same program couldn't then play chess against the same player. Or if you changed the rules of Go on the fly, the AI would fail dismally.
Generalisation is a major problem that machine learning really fails at. This is the state of AI now.
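To illustrate what "narrow" means in code terms, here's a deliberately crude sketch of my own (nothing like how AlphaGo actually works): imagine a policy learned as a lookup over exactly the game states seen during training.

```python
# A toy "narrow AI": its learned policy only covers the state space
# it was trained on (hypothetical 3-cell game states).
learned_policy = {
    ("X", "_", "_"): "play centre",
    ("X", "O", "_"): "play corner",
}

def act(state):
    # Strong on states from the training domain...
    if state in learned_policy:
        return learned_policy[state]
    # ...helpless the moment the game or its rules change shape.
    return "no idea"

print(act(("X", "O", "_")))       # play corner
print(act(("X", "O", "_", "_")))  # no idea -- a 4-cell board is a new domain
```

Real systems interpolate rather than memorise, but the underlying limitation is the same: performance collapses outside the problem domain the system was built and trained for.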
(May 31, 2019 at 11:41 am)Aegon Wrote: Also, what's your take on Andrew Yang and his fear that AI will kill so many jobs we will need a guaranteed basic income provided by the government?
This is exactly the kind of thing we should be talking about with regard to AI. There are many, many problems like this that actual AI researchers are very aware of, yet people like Elon Musk ignore them while talking about killer robots and existential threats. Being cynical about it, and I have good reason to be, there is plenty of reason to suggest that this is a deliberate ploy. If people are worried about sci-fi inspired tropes that won't ever happen, they're less likely to worry about how their jobs are being replaced by automation (which doesn't automatically require AI), or that these systems encode society's prejudices and reinforce them, or that decisions affecting our lives can't be explained because the computer said so, or how we determine who is responsible when AI goes wrong. This means corporations increase their profits with little scrutiny until it's too late.
These were the points I made when writing to MSPs in the Scottish Parliament about a motion raised on existential threats (Elon Musk's name was specifically mentioned).
(May 31, 2019 at 11:41 am)Aegon Wrote: Are you really that pessimistic about the advances AI could make in the next 10, 20, 50, etc. years?
Yes.
(May 31, 2019 at 11:41 am)Aegon Wrote: Look at how technology has transformed our lives in the last 10 years alone; we are reliant on it entirely. It's a staple of our society now. In developed, affluent countries, it's how we communicate, how we entertain ourselves, how we learn information... it's taken over every aspect of our lives, and it's because we figured out how to put a whole-ass computer in our pockets. Homo sapiens are officially radically changed because of it. You don't see AI making similar strides? Does more intensely advanced technology not demand AI progression?
None of which helps the development of strong AI. This is narrow AI created for a specific purpose that cannot generalise over different problem domains. It does not scale up.
(May 31, 2019 at 11:41 am)Aegon Wrote:(May 30, 2019 at 11:49 am)Mathilda Wrote: Elon Musk one of the brightest minds on the planet? You think he personally designed the SpaceX rockets, Tesla cars and AI? No. He hired people to do that for him. His talent is in spotting opportunity and hiring talent. He is not Tony Stark.
He doesn't have to personally design the rockets or cars to be one of the brightest minds on the planet. But he's working 18-hour days and sleeping in his factory office. What's he doing? Tweeting? Thinking of business strategy? He has significant involvement in the actual engineering and science of it. You really don't think he has a greater understanding of AI, as CEO of a company whose product revolves around utilisation of AI, than the average joe? I feel like you're exaggerating his lack of knowledge.
I don't have a problem with the idea that he might be "leading on" the public with his unrealistic expectations of AI technology, or that he might be wrong about some things. My issue is your bravado, as if the guy is completely clueless. He is plainly, obviously not.
Exactly. He's working 18-hour days on his own companies, none of which are in AI (any more). Tesla itself is failing. Take Karl Friston: he has a similar workload, but he devotes himself to brain sciences, and he is someone who clearly has a lot of expertise relevant to the field of AI. Even so, I wouldn't automatically say that he had the expertise to found and run a company.
Sure, I am exaggerating by saying Elon Musk knows fuck all about AI. He clearly knows more than the average person in the street, but not as much as an MSc student. If I spoke to a PhD student I could expect to learn something new and interesting. If I wanted to learn about the AI that a company he invested in is working on, I'd go to the lead scientist or engineer. I wouldn't ask Elon Musk.
By your logic, politicians must be some of the brightest people on the planet because they have to know the basics of so many different aspects of society. In practice, though, they get given a single A4 summary and the rest is delegated to the appropriate experts. The same happens in business. Sure, they have to be intelligent to pick up and understand such topics very quickly, but that's a far cry from having any real experience in them.