So I'm in general agreement that Sam Harris doesn't really know what he's talking about when it comes to AI, as with a lot of subjects (he's spoken multiple times in favor of racial profiling).
The simple answer to the question "Can we build AI without losing control over it?" is YES.
People seem to hear "AI" and think "robots". The two are completely different things. When you think "AI", you should just think "electronic brain", because that's all it is: a computer advanced enough to "think" about a problem and come up with a solution. We already have AI; it's just very limited intelligence, programmed to perform basic actions based on inputs (e.g. Tesla's Autopilot).
So if the goal is "human-like" intelligence, e.g. a computer you can converse with and have rational arguments with (a computer that thinks up new arguments would be cool), is there a danger of losing control of it? Well, it depends on how much power you give it. How much danger does a human really pose if they have no arms or legs? Not much. A computer sitting in a room, even one loaded with the most advanced AI in the world, would still just be a computer sitting in a room. AI doesn't suddenly grant a computer the ability to move.
However, one potentially disastrous thing we could do with an advanced AI is connect it to the Internet. Given how ridiculously vulnerable Internet-connected systems are to cyber attacks, an AI could do serious damage to a lot of human infrastructure. So in that sense, yeah, we could lose control of it.