(November 4, 2016 at 3:46 pm)Rhythm Wrote: We -should- talk about those things....and we do, but shouldn't we also talk about the potential risks of AI?

Harris didn't advocate for sticking our heads in the sand on anything, in the video I watched. This is a complete "wtf" objection, both to the man himself and to the presentation. Maybe physicists should lay off the dangers of nuclear weapons and talk about the disappearance of the purple horned peckleswagger for a change?
It is on a par with warning a physicist at the beginning of the 19th century that trying to figure out the ether will one day lead us to invent nuclear bombs and destroy ourselves. Or maybe you should tell Alex K not to work at CERN because it will lead to a planet-destroying Q bomb? I've seen Starship Troopers 3; I know how these things go.
What Harris is talking about is so far in the future that it is mere speculation as to what the technology will be, if it appears at all. How, then, can he warn of its dangers when we don't actually know what those dangers will be?
Yes, there are risks that we can talk about in terms of AI, as with any field. For example: automating drones to kill on sight. Automated surveillance for recognising faces and tracking people. Using algorithms to determine who gets insurance, jobs or loans. Sentiment analysis to trawl our electronic communications and stamp on dissent. Sure, talk about this. This needs talking about. I want to talk about this. And none of it is a problem with AI itself, but with how we are currently using it, or could use it in the near future. But that's not what Sam Harris is doing. He's indulging in science fantasy.