RE: Can we build AI without losing control over it? | Sam Harris
November 4, 2016 at 5:59 pm
(This post was last modified: November 4, 2016 at 6:00 pm by Edwardo Piet.)
If you accept the premises and the conclusion, and only disagree with a strawman of things he didn't say, like claiming he was assuming exponentiality when he wasn't, then you're not actually disagreeing with him.
I think emotions get in the way sometimes. First you said his assumptions weren't sound. Then you said you didn't have a problem with the assumptions. Then you said he was assuming exponentiality. Then I pointed out that he said the exact opposite. It's getting thinner and thinner, until now you're agreeing with everything he said in principle and simply asking how in practice it's going to happen, and you've moved the goalpost to doubting that a superintelligence could become dangerous. I find it rather analogous to the God of the gaps getting smaller and smaller... until eventually it will be "Okay, I agree in principle with what he said in this video, but I disagree with what he said elsewhere", hehe.