(November 4, 2016 at 6:19 pm)abaris Wrote:
(November 4, 2016 at 6:15 pm)Alasdair Ham Wrote: @ Abaris
I'm not saying we intrinsically have to assume his premises. I'm saying we have to assume his premises in order for his conclusion to follow from his premises.
Sure, but why should we do that without reflecting on the scenario?
Just remember that what Harris is saying is purely hypothetical, based on 3 assumptions. He's not talking about how it could happen in practice, or about anything else practical. He's saying that, in his opinion and for the reasons he gives, it is probable that A.I. will be dangerous in the future if you accept the 3 assumptions he mentioned.
So I will make 3 submissions now:
1. I submit that if you don't see how a superintelligence can be dangerous... then I don't know what to say.
2. I submit that if you don't see how claiming he's assuming exponential growth misrepresents what he actually said, when he specifically said he's not assuming that... then I don't know what to say.
3. I submit that if you don't see why not knowing how A.I. is going to keep getting better is irrelevant once you've already accepted the premise that it is going to keep getting better... then I don't know what to say.