RE: Can we build AI without losing control over it? | Sam Harris
November 5, 2016 at 1:39 am
(This post was last modified: November 5, 2016 at 1:39 am by Edwardo Piet.)
Which guy?
Looks to me like you're right about Mathilda being in the minority when it comes to the experts... according to RationalWiki:
RationalWiki.com Wrote:An artificial general intelligence deciding that humanity is an impediment or superfluous to meeting its goals. Though it is disputed whether this is an X-risk we need to worry about in the short term (many actual AI researchers don't think so), it probably is in the long term. This is because the number of scientists and academics who think we will never be able to reproduce human consciousness in a machine is quite small, and even they aren't necessarily sure that their arguments are true, or relevant (AIs might not need to be either conscious or human-like to be dangerous, for example).
Source:
http://rationalwiki.org/wiki/Existential_risk
My emphasis.