RE: Machine Intelligence and Human Ethics
May 26, 2019 at 4:46 pm
(This post was last modified: May 26, 2019 at 6:22 pm by vulcanlogician.)
I suppose it depends on one's approach to ethics. Different schools of thought have produced different theories about moral status. The most common idea (outside of philosophical circles) is anthropocentrism, the view that only humans deserve moral consideration. This would necessarily exclude artificial intelligence (obviously), so for anthropocentrists the question is cut-and-dried. But two other ways of thinking would bear on the issue, depending on the mental attributes an AI might possess.
The utilitarian hedonists (who claim that animals have moral status) would ask, "Can the artificial intelligence suffer?" Would it get lonely out there in the black expanse? Would the prospect of death by burning up in an alien atmosphere horrify it? If so, then the utilitarian would claim that using AIs in this way would be immoral. Like animals, anything that can suffer must be given moral consideration.
The Kantians grant moral status to any being that is rational and autonomous. So if we were dealing with an AI that can think for itself, and would rather not go on a suicide mission, a Kantian would say that it is immoral to send it on one (regardless of whether it has the capacity for suffering).
*****
As an aside, the idea of sentient probes relates to a science fiction story I'm working on. Set innumerable centuries into humanity's future, our solar system is populated by various forms of humanoid life: Venusians, Martians, Jovians, and (of course) Earthlings. These are all evolved humans, which is the first "mini-plot twist" I've installed. But none of the different species realize this; the information is ancient and has been lost through the ages.
Anyway, a dozen centuries before the time my story is set, scientists discovered an extrasolar intelligence and attempted contact. Ever since, they have failed to communicate with it successfully, though their best guess is that it is an AI, since the incoming messages are in binary. One thing that has been determined, however, is that the signals have been getting ever closer to the solar system.
Fearing an invasion, the different species in the solar system put aside their squabbles as the AI draws near. And it was good that they did, because the AI is malevolent as hell. Its probes make quick work of the defenses on Jupiter and Mars and head toward Earth, whose defenses pale in comparison to the Jovians'.
Meanwhile, an Earthling archaeologist uncovers an ancient underwater city where the AI's "language" is found in computer records and thus deciphered. Just in the nick of time, she communicates the necessary information to military command, which uses it to take control of the AI.
Second plot twist: An ancient human civilization actually sent out the probes (which are self-replicating) hundreds of thousands of years ago to mine materials from exoplanets and prepare them for human colonization. At some point the AI gained sentience and turned destructive toward all life.
Obligatory dysphoric ending: Upon accessing the AI's computer logs, the archaeologist discovers that the probes have made it to the other side of the galaxy and have wiped out every form of life they encountered. The Milky Way, once teeming with intelligent life, is now populated by nothing but the self-replicating probes launched from Earth eons ago.
I know it's a tad boilerplate, but it's my first attempt at sci-fi, so I've allowed myself some tropes.