RE: Robot Rights.
April 18, 2012 at 12:24 pm
(This post was last modified: April 18, 2012 at 12:26 pm by NoMoreFaith.)
One of those nasty little questions that at first seem simple until you give them hard thought.
Personally, I think a machine has rights if it can be made to feel harm; by extension, you can then apply a golden rule.
If the machine is capable of being harmed (and of comprehending that harm) in a way you would not want to be harmed yourself in the same situation, then it has the right to be protected from that harm.
However, I think a truly self-aware computerised intelligence would appear completely alien to us. The error in a lot of AI work is trying to make computers act like humans, despite their completely different construction and method of comprehension.
For this reason, machine intelligence should be approached the way we would approach alien intelligence. If its method of thinking is substantially different, with different motivations behind its decision-making, then restricting it becomes important, to prevent harm that we would consider immoral.
Asimov got there a long time ago with I, Robot:
Quote:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
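As an aside, the Laws read almost like a program, since each one defers to the ones above it. Here's a minimal Python sketch of that priority ordering; to be clear, the Action type, its fields, and the selection logic are all made up for illustration, not taken from Asimov:

Code:
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    # Hypothetical properties of a candidate action (invented for this sketch).
    harms_human: bool        # would the action injure a human?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    obeys_order: bool        # does it follow an order given by a human?
    preserves_self: bool     # does it protect the robot's own existence?

def first_law_permits(action: Action) -> bool:
    # First Law: a robot may not injure a human, or allow harm through inaction.
    return not (action.harms_human or action.allows_human_harm)

def choose(actions: List[Action]) -> Optional[Action]:
    # Discard anything the First Law forbids; it overrides everything else.
    candidates = [a for a in actions if first_law_permits(a)]
    # Second Law: among what remains, prefer actions that obey human orders.
    pool = [a for a in candidates if a.obeys_order] or candidates
    # Third Law: break remaining ties in favour of self-preservation.
    pool = [a for a in pool if a.preserves_self] or pool
    return pool[0] if pool else None

The only point here is that the Laws form a strict hierarchy: each lower law acts only as a tie-breaker once the ones above it are satisfied.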
Without such restrictions, our own values and self-preservation would be at risk if we allowed an alien or robot intelligence too much power over mankind.
Self-authenticating private evidence is useless, because it is indistinguishable from the illusion of it. ― Kel, Kelosophy Blog
If you’re going to watch tele, you should watch Scooby Doo. That show was so cool, because every time there’s a church with a ghoul, or a ghost in a school, they looked beneath the mask, and what was inside?
The f**king janitor or the dude who runs the waterslide. Throughout history, every mystery ever solved has turned out to be not magic. ― Tim Minchin, Storm