RE: Can we build AI without losing control over it? | Sam Harris
November 5, 2016 at 5:46 am
(This post was last modified: November 5, 2016 at 6:08 am by I_am_not_mafia.)
(November 5, 2016 at 1:39 am)Alasdair Ham Wrote: Which guy?
Looks to me like you're right about Mathilda being in the minority when it comes to the experts... according to RationalWiki:
RationalWiki Wrote:An artificial general intelligence deciding that humanity is an impediment or superfluous to meeting its goals. Though it is disputed whether this is an X-risk we need to worry about in the short term (many actual AI researchers don't think so), it probably is in the long term. This is because the number of scientists and academics who think we will never be able to reproduce human consciousness in a machine is quite small, and even they aren't necessarily sure that their arguments are true, or relevant (AIs might not need to be either conscious or human-like to be dangerous, for example).
Source:
http://rationalwiki.org/wiki/Existential_risk
My emphasis.
Saying that we can theoretically reproduce it given enough computing power is very different from saying that we will eventually reproduce it.
And there's equivocation here about 'reproduce'. Does it mean to simulate consciousness or to create it?
You can't create human consciousness in something that is not human because then it would not be human consciousness.
If that's what RationalWiki is saying, then it is wrong, and I don't think many people working in the field of artificial consciousness would disagree with me on that.
I believe that we can give AI consciousness, for example; I think we should, and I don't think it's a hard problem. It won't ever be human consciousness, though. Given enough processing power we could simulate or approximate human consciousness, but a simulation isn't actually human consciousness.