RE: Can we build AI without losing control over it? | Sam Harris
November 5, 2016 at 7:24 am
(This post was last modified: November 5, 2016 at 7:24 am by Excited Penguin.)
(November 5, 2016 at 5:46 am)Mathilda Wrote:
(November 5, 2016 at 1:39 am)Alasdair Ham Wrote: Which guy?
Looks to me like you're right about Mathilda being in the minority when it comes to the experts... according to RationalWiki:
Source:
http://rationalwiki.org/wiki/Existential_risk
My emphasis.
Saying that we can theoretically reproduce it given enough computing power is very different from saying that we will eventually reproduce it.
And there's equivocation here about "reproduce". Does it mean to simulate it, or to create it?
You can't create human consciousness in something that is not human because then it would not be human consciousness.
If that's what RationalWiki is saying, then it is wrong, and I don't think many people working in the field of artificial consciousness would disagree with me on that.
I believe that we can give AI consciousness, for example. I think we should, and I believe it's not a hard problem. It won't ever be human consciousness, though. I believe that given enough processing power we can simulate or approximate human consciousness, but that isn't actually human consciousness.
So you believe we can give it consciousness, and that doesn't scare you...
Yes, you are off your rocker, and all this is an overreaction to what you must be perceiving as an attack on how you make your living.
You can't find work as an AI researcher. Seriously. You must be a really bad one, given how much funding is going into the field right now.
No one cares that you're researching it or trying to make it. All anyone cares about is the safety problem, something you're not only comfortable dismissing for no reason at all, but are actually vehemently fighting against. Why do you even care? Like, what??