(November 4, 2016 at 3:30 pm)Rhythm Wrote: I'm not sure what's going on here, and maybe Harris doesn't know his shit when it comes to AI, but the people saying that just keep -agreeing- with the very points he actually raised...........what gives?
QFT.
His conclusion follows from his premises. If AI keeps getting better, and it eventually becomes able to think for itself at a level far smarter than our own, then it certainly could be very dangerous.
I don't see how that makes him a hypocrite. He gave his three assumptions, and maybe he's completely wrong about them, but if those assumptions are true then his argument follows.
Like he said, the improvement doesn't need to be exponential.
@Tibby
I don't remember Harris ever talking about racial profiling, but I do remember him talking about religious profiling. He offered himself as an example of someone who fits the profile, after he was accused of racism.
But yeah, I actually don't agree with him on that either, or on utilitarian aggregation; I think he's rather paranoid about terrorism. But I'm not really interested in politics anyway.
I'm mostly interested in his stance on free will, lying, objective morality, and his arguments against religion. I also think he's very good in debates. I agree with him on a lot of things, but not on his politics. He seemed rather right wing last I checked, and I think that's because he wrongly aggregates utility. For example, he thinks torture is hypothetically justifiable (in a thought experiment) if it saves a huge number of innocent people, because the suffering of one person is outweighed by saving many. But I don't agree that suffering or happiness can be aggregated like that, because of the consciousness barrier.