Machine Intelligence and Human Ethics
#1
Machine Intelligence and Human Ethics
I recently reread Carl Sagan's essay 'In Defense Of Robots', in which he makes his usual eloquent case for the unmanned exploration of space, among other things.

In the essay, he stresses the point that unmanned space vehicles will necessarily have to become more intelligent if they are to remain the best option for this type of work. But he stops short of what seems (to me, at least) to be a vitally important question: At what point does machine intelligence make it unethical to use spacebots for suicide missions?

Since Sagan wrote the piece in the late 1970s, and even since his death almost a quarter century ago, machine intelligence has grown tremendously. The self-awareness/sentience of computers appears to be a matter not of 'if', but 'when'. Do we have any ethical justification for sending a machine of even rudimentary self-awareness to, say, Venus (where it will almost certainly be fried) or Jupiter (where it will almost certainly be crushed)?

I know that in the earliest days of space exploration, dogs and chimps were used as test subjects and no one minded all that much. But there is a feeling about animal 'rights' today that did not exist even a generation ago. Might not the same eventually be true of machines?

Boru
‘I can’t be having with this.’ - Esmeralda Weatherwax
#2
RE: Machine Intelligence and Human Ethics
(May 26, 2019 at 3:24 pm)BrianSoddingBoru4 Wrote: I recently reread Carl Sagan's essay 'In Defense Of Robots', in which he makes his usual eloquent case for the unmanned exploration of space, among other things.

In the essay, he stresses the point that unmanned space vehicles will necessarily have to become more intelligent if they are to remain the best option for this type of work. But he stops short of what seems (to me, at least) to be a vitally important question: At what point does machine intelligence make it unethical to use spacebots for suicide missions?

Since Sagan wrote the piece in the late 1970s, and even since his death almost a quarter century ago, machine intelligence has grown tremendously. The self-awareness/sentience of computers appears to be a matter not of 'if', but 'when'. Do we have any ethical justification for sending a machine of even rudimentary self-awareness to, say, Venus (where it will almost certainly be fried) or Jupiter (where it will almost certainly be crushed)?

I know that in the earliest days of space exploration, dogs and chimps were used as test subjects and no one minded all that much. But there is a feeling about animal 'rights' today that did not exist even a generation ago. Might not the same eventually be true of machines?

Boru

Once the bots know it's a suicide mission and tell us they don't want to go. Until then, it's just a mission, HAL. Dodgy

Side note: I just watched a vid where a dog owner died (recently), and her will stipulated that her pet dog be cremated and placed/planted with her. They followed the will; dogs are still considered property. I'm not sure that we have advanced as much as people like to think.

Here is the story (not what I watched): http://vt.co/news/us/healthy-dog-is-euth...ead-owner/
Being told you're delusional does not necessarily mean you're mental. 
#3
RE: Machine Intelligence and Human Ethics
(May 26, 2019 at 3:24 pm)BrianSoddingBoru4 Wrote: I know that in the earliest days of space exploration, dogs and chimps were used as test subjects and no one minded all that much. But there is a feeling about animal 'rights' today that did not exist even a generation ago. Might not the same eventually be true of machines?

Boru

We already do, thanks in large part to sci-fi.

I have no doubt that the question of legal personhood will eventually be raised for AGIs, probably only a few years after we've managed to 'formulate' a true AGI.
"The first principle is that you must not fool yourself — and you are the easiest person to fool." - Richard P. Feynman
#4
RE: Machine Intelligence and Human Ethics
Easy.

Simply program it to WANT to die....


(Or have it spend some time with my ex-wife. Essentially the same thing.)
#5
RE: Machine Intelligence and Human Ethics
(May 26, 2019 at 3:37 pm)wyzas Wrote:
(May 26, 2019 at 3:24 pm)BrianSoddingBoru4 Wrote: I recently reread Carl Sagan's essay 'In Defense Of Robots', in which he makes his usual eloquent case for the unmanned exploration of space, among other things.

In the essay, he stresses the point that unmanned space vehicles will necessarily have to become more intelligent if they are to remain the best option for this type of work. But he stops short of what seems (to me, at least) to be a vitally important question: At what point does machine intelligence make it unethical to use spacebots for suicide missions?

Since Sagan wrote the piece in the late 1970s, and even since his death almost a quarter century ago, machine intelligence has grown tremendously. The self-awareness/sentience of computers appears to be a matter not of 'if', but 'when'. Do we have any ethical justification for sending a machine of even rudimentary self-awareness to, say, Venus (where it will almost certainly be fried) or Jupiter (where it will almost certainly be crushed)?

I know that in the earliest days of space exploration, dogs and chimps were used as test subjects and no one minded all that much. But there is a feeling about animal 'rights' today that did not exist even a generation ago. Might not the same eventually be true of machines?

Boru

Once the bots know it's a suicide mission and tell us they don't want to go. Until then, it's just a mission, HAL. Dodgy

Side note: I just watched a vid where a dog owner died (recently), and her will stipulated that her pet dog be cremated and placed/planted with her. They followed the will; dogs are still considered property. I'm not sure that we have advanced as much as people like to think.

Here is the story (not what I watched): http://vt.co/news/us/healthy-dog-is-euth...ead-owner/

But would the machine even have the right to refuse the mission?  We're in the habit of compelling human beings to perform dangerous, life-threatening tasks, under threat of imprisonment if they refuse to comply.

Boru
‘I can’t be having with this.’ - Esmeralda Weatherwax
#6
RE: Machine Intelligence and Human Ethics
(May 26, 2019 at 4:01 pm)onlinebiker Wrote: Easy.

Simply program it to WANT to die....


(Or have it spend some time with my ex-wife. Essentially the same thing.)

The question isn't 'can we' but 'should we'.

Boru
‘I can’t be having with this.’ - Esmeralda Weatherwax
#7
RE: Machine Intelligence and Human Ethics
Depends on the level of sentience they end up developing.

There will come a time, I suspect, when we may have to limit the intelligence of space probes, because some will develop self-awareness and a drive for self-preservation.

Playing Cluedo with my mum while I was at Uni:

"You did WHAT?  With WHO?  WHERE???"
#8
RE: Machine Intelligence and Human Ethics
(May 26, 2019 at 4:10 pm)BrianSoddingBoru4 Wrote:
(May 26, 2019 at 4:01 pm)onlinebiker Wrote: Easy.

Simply program it to WANT to die....


(Or have it spend some time with my ex-wife. Essentially the same thing.)

The question isn't 'can we' but 'should we'.

Boru

Sure we should.

Even if a machine is self-aware, it's not going to get emotional about things. You would have to program it that way. You wouldn't.

So the machine isn't going to get all worked up and create a religion for itself to make it feel better about dying - unlike some carbon-based lifeforms I won't mention....
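
For what it's worth, here's a toy sketch of what "not programming it that way" amounts to (every name in it is made up for the example): an objective that simply assigns no value to survival.

```python
# Purely illustrative sketch: a probe objective with no emotions and no
# self-preservation term. Every name here is invented for the example.

def mission_utility(science_returned: float, probe_survives: bool) -> float:
    """How 'good' an outcome is to the probe: only science return counts."""
    # Survival gets zero weight, so a one-way trip that returns more data
    # beats a round trip that returns less.
    return science_returned

# A planner maximizing this utility happily picks the suicide mission:
print(mission_utility(100.0, probe_survives=False))  # 100.0
print(mission_utility(60.0, probe_survives=True))    # 60.0
```

No dread, no probe religion; dying just isn't a term in the equation.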
#9
RE: Machine Intelligence and Human Ethics
I suppose it depends on one's approach to ethics. Different schools of thought have produced different theories about moral status. The most common idea (outside of philosophical circles) is anthropocentrism: that only humans deserve moral consideration. This would necessarily exclude artificial intelligence (obviously), so for anthropocentrists the question is pretty cut-and-dried. But two other ways of thinking come to bear on the issue, depending on the mental attributes an AI might possess.

The utilitarian hedonists (who claim that animals have moral status) would ask, "Can the artificial intelligence suffer?" Would it get lonely out there in the black expanse? Would the prospect of death by burning up in an alien atmosphere horrify it? If so, then the utilitarian would claim that using AIs in this way would be immoral. Like animals, anything that can suffer must be given moral consideration.

The Kantians grant moral status to any being who is rational and autonomous. So if we were dealing with an AI that can think for itself and would rather not go on a suicide mission, a Kantian would say it is immoral to send it on one (regardless of whether it has the capacity for suffering).
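
To make the contrast concrete, here is a toy sketch, nothing more than the three positions above restated as predicates (the Probe attributes are invented purely for illustration), showing how the frameworks can disagree about the very same probe:

```python
# Toy sketch: the three moral-status tests above restated as predicates.
# The Probe attributes are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Probe:
    is_human: bool      # all that anthropocentrism cares about
    can_suffer: bool    # all that utilitarian hedonism cares about
    is_rational: bool   # Kantians care about these two...
    is_autonomous: bool
    consents: bool      # ...plus whether it agrees to go

def anthropocentric_ok(p: Probe) -> bool:
    # Only humans have moral status, so using any machine is permissible.
    return not p.is_human

def utilitarian_ok(p: Probe) -> bool:
    # Wrong only if the probe can suffer (loneliness, dread of burning up).
    return not p.can_suffer

def kantian_ok(p: Probe) -> bool:
    # Wrong to send a rational, autonomous agent without its consent,
    # whether or not it can suffer.
    return not (p.is_rational and p.is_autonomous) or p.consents

probe = Probe(is_human=False, can_suffer=False,
              is_rational=True, is_autonomous=True, consents=False)
print(anthropocentric_ok(probe), utilitarian_ok(probe), kantian_ok(probe))
# -> True True False: only the Kantian objects to this mission.
```

The interesting cases are exactly the ones where the three answers diverge.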

*****

As an aside, the idea of sentient probes relates to a science fiction story I'm working on. Set innumerable centuries into humanity's future, our solar system is populated by various forms of humanoid life: Venusians, Martians, Jovians, and (of course) Earthlings. These are all evolved humans, which is the first "mini-plot twist" I've installed. But none of the different species realize this; the information is ancient and has been lost through the ages.

Anyway, a dozen centuries before the time my story is set, scientists discovered extra-stellar intelligence and attempted contact. Ever since, they have failed to communicate successfully with the aliens, although their best guess is that it's an AI, since the incoming messages are in binary. But one thing that has been determined is that the signals have been getting ever closer to the solar system.

Fearing an invasion, the different species in the solar system put aside their squabbles as the AI draws near. And it's good that they do, because the AI is malevolent as hell. It makes quick work of the defenses on Jupiter and Mars and makes its way toward Earth, whose defenses pale in comparison to the Jovians'.

Meanwhile, an Earthling archaeologist uncovers an ancient underwater city in which the AI's "language" is found in computer records and is thus deciphered. Just in the nick of time, she is able to communicate the necessary information to military command, which is able to use it to control the AI.

Second plot twist: an ancient human civilization actually sent out the probes (which are self-replicating) hundreds of thousands of years ago to mine materials from exoplanets and prepare them for human colonization. At some point the AI gained sentience and became destructive toward all life.

Obligatory dysphoric ending: upon accessing the AI's computer logs, the archaeologist discovers that the probes have made it to the other side of the galaxy and have wiped out every form of life they encountered. The Milky Way galaxy, once teeming with intelligent life, is now populated by nothing but the self-replicating probes launched from Earth eons ago.

I know it's a tad boilerplate, but it's my first attempt at sci-fi, so I've allowed myself some tropes.
#10
RE: Machine Intelligence and Human Ethics
This was addressed in several episodes of Star Trek: The Next Generation. The best-known is "The Measure of a Man", in which Data is put on trial to determine whether or not he/it has human rights.

It seems clear to me: if you create a general artificial intelligence and there is any reasonable suspicion that the AI might be sentient in the same way humans are, the only ethical thing to do is to grant it freedom. Creating an AI as a slave is obviously unethical by any modern standard.
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

Albert Einstein