(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: Well no, that isn't part of his argument so it's not important.
We just need to assume that it's inevitable eventually.
No. We cannot make that assumption for the myriad of reasons I have already laid out.
It might happen. It very well might not.
Personally, my best guess, as someone who develops strong AI, is that we'll end up with the robot equivalent of animals for very specific environments. We've been using animals for a very long time now.
(November 4, 2016 at 6:05 pm)Alasdair Ham Wrote: AI is getting better and better and one day will be so much smarter than us that we will be insignificant to it.
OK I'm going to post here what I wrote on TTA. I was trying to avoid this but I really don't want to have to write it out again.
Quote:I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability, and many companies and research projects have failed because people don't appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case and assume that, because it works there, they can build something generally useful from it. But then they find that they can't scale their AI up.
It's called the curse of dimensionality, and it doesn't just affect AI but any computer program that has to take multiple real-world variables into account. Say, for example, you have 10 different sensors on a fighter aircraft and want to predict when a component will fail. You could plot all this on a graph and try to analyse the hyperdimensional space, but each sensor you add multiplies the volume you have to cover, so the space grows exponentially with the number of dimensions. The travelling salesman problem is a classic example of the same explosion: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? The number of possible routes grows factorially, so add too many cities and a brute-force search soon needs more computing time than has existed in the entire history of the universe.
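To make that concrete, here's a minimal Python sketch (my own illustration, not part of the original TTA post) that counts the grid cells in the sensor example and the routes a brute-force travelling salesman search would have to check:

```python
import math
from itertools import permutations

# Curse of dimensionality: split each sensor's range into just 10 bins
# and count the cells you would have to cover.
for d in (10, 11, 12):
    print(f"{d} sensors -> {10 ** d:.1e} cells")  # each extra sensor = 10x the space

# Travelling salesman by brute force: with city 0 fixed as the start,
# there are (n - 1)! routes to check, which grows factorially.
def brute_force_tsp(dist):
    """Try every tour over the distance matrix; feasible only for tiny n."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # city 0 fixed as start/end
        tour = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

for n in (10, 20, 30):
    print(f"{n} cities -> {math.factorial(n - 1):.2e} routes to check")
# 10 cities -> 3.63e+05, 20 cities -> 1.22e+17, 30 cities -> 8.84e+30
```

Smarter exact solvers and heuristics do far better than brute force, but the underlying search space still explodes with problem size, and that's the scalability wall.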
There is a reason why the human brain has so many neurons and such high connectivity between them.
So the problem is that people expect progress to follow an exponential curve: it's taken this long to achieve self-driving cars, so what will happen in another five years? What they aren't taking into account is that we reached this point because of the exponential doubling of processing power from Moore's law over many decades. Our understanding of intelligence hasn't progressed anywhere near as fast; in fact, nothing else has. And Moore's law is coming to an end.
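For a sense of scale, here's a back-of-envelope calculation (my own, assuming the commonly quoted doubling period of roughly two years, which has varied in practice):

```python
# Rough Moore's-law arithmetic: transistor budgets doubling every ~2 years.
years = 40                      # roughly the 1975-2015 stretch
doubling_period = 2             # assumed cadence; the real figure has varied
growth = 2 ** (years / doubling_period)
print(f"~{growth:,.0f}x more transistors over {years} years")  # ~1,048,576x
```

That million-fold hardware windfall did most of the heavy lifting; nothing comparable happened to our understanding of intelligence.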
Another aspect of the scalability issue is that each small advance in AI has large effects because it touches many people. So, for example, whole call centres full of people can now be replaced by an automated system. The automated system is extremely constrained, but that doesn't matter much to the thousands of people looking for new jobs. Sure, talk about that; it's an issue of socio-economics rather than AI. What Sam Harris is talking about is science fantasy.