RE: Will AI ever = consciousness or sentience?
December 3, 2012 at 5:22 am
(This post was last modified: December 3, 2012 at 5:28 am by Angrboda.)
(December 3, 2012 at 4:12 am) DoktorZ Wrote:
> Free will alert, free will alert .. abort, abort. That was close.
Lol, exactly. How do we meaningfully distinguish a system that adjusts its behavior to produce optimal outcomes from a system that "desires" those outcomes--if such desires are rooted in a material thing like a brain?
Now, I'm carrying on merrily in life as if I truly possess free will, in part because of the ethical ramifications of not having it. But the more dynamic systems theory I read, the less liberal humanism I'm actually willing to accept. Some of the justifications I've read for free will seem to be the deformed discursive offspring of theology, not adequately separated from it--sort of like that film "Basket Case."
More importantly, much of the advocacy for intention and free will seems to be rooted in our need for ethics. I have a pet theory that the gradual examination of modern horrors like the Holocaust and the Great Famine in China has (re-)created a moral need for theories of "free will" and other sentimentalities of that sort.
I'm of the opinion that much of the defense of free will takes the form of frantic attempts to save a particular view of ethics. Like Christians who hypothesize that you cannot have goodness and morality without God, people can't imagine how you can have a workable society without an ethics of personal responsibility, or personal responsibility without free will. I'm persuaded, though, that when you look at the practical application of ethics, that of controlling behavior through law, incentives, and punishments, the need for free will evaporates. As noted elsewhere, modern criminal punishment has four primary goals: 1) protection of society, by removing dangerous elements; 2) retribution, an eye for an eye; 3) rehabilitation, changing a problem behavior; and 4) deterrence, providing an incentive for people to avoid those behaviors. Of these four goals, only retribution seems to depend on free will and moral culpability, and it has long been recognized as the theory of punishment with the most practical and ethical problems.
@whateverist:
I think you're engaged in a bit of question begging and a failure of imagination. You're enacting the very fear that DoktorZ voiced: that if you only acknowledge sentience of a certain pattern, the human pattern, you will blind yourself to other, equally valid patterns. The questions of mortality and pain bring this to the fore. There's an analogous situation in which people are said to imagine robots as a box with wheels, and that mindset limits what they can imagine a robot to be. At bottom, unless you're advocating that we are something supernatural or basically inexplicable, we too are machines; biological machines, but machines nonetheless. If other machines are not considered capable of feeling pain, then we don't "really" feel pain either, and this pain you speak of becomes a mere placeholder for 'is human' or 'is biological'. At bottom, we are nothing more than boxes with wheels, too: our skulls and chests the box, our brains the computer, and our arms and legs the wheels. Anything you would deny a non-biological machine, you must either equally deny to humans or animals, or find some supernatural explanation for. Because if pain is just an idea in a biological machine prompted by certain dispositions of its systems, which it pretty clearly is, I don't see why you think a machine intelligence would be incapable of similar ideas. Perhaps you're imagining a machine intelligence as a box with wheels: a box without receptors, sensors, and actuators, without the ability to monitor the status of its systems the way our pain receptors do for us. If so, then you're illegitimately sneaking in rather self-serving limitations on this machine sentience and falling into the trap that DoktorZ outlined.
Let me offer a couple of hypotheticals.
First, imagine a future in which we've determined that the important functions of our biological life form are all macroscopically well described, and that quantum effects play no significant role. We've progressed to the point that we can scan and digitize the entirety of a human being and store the result, or process that scan for diagnostic purposes. Suppose, furthermore, that we can take these scans and recreate the pattern contained in the image: we can create more of you just by scanning you and printing off copies. Now suppose you go off to explore a planet for a few years and you're eaten by a grue or something. But we still have your scan from your last physical, so the doctors simply print off a new copy of you, pat you on the head, and send you on your merry way. Does this mean that you didn't die? Does this mean you are no longer mortal? If not, how is this any different from disrupting the systems of a sentient machine and then resetting it to a state it might have been in at a prior time? (And if current machine intelligence is any indication of the trend, recreating a meaningfully recent state of such a machine would be about as difficult as recreating you is today: very difficult to impossible.)
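For what it's worth, here's a toy sketch of that "scan and reprint" hypothetical as plain state checkpointing, just to show how unremarkable the operation is from the machine's side. It's Python with an invented Agent class; everything in it is an assumption for illustration, not a description of any real system.

```python
# Toy sketch: "scan and reprint" as state checkpoint and restore.
# The Agent class and its fields are invented for illustration only.

import copy

class Agent:
    def __init__(self, memories):
        self.memories = list(memories)

    def snapshot(self):
        """'Scan': capture the complete state at this moment."""
        return copy.deepcopy(self.__dict__)

    @classmethod
    def restore(cls, scan):
        """'Print a new copy': build a fresh agent from an old scan."""
        fresh = cls.__new__(cls)
        fresh.__dict__.update(copy.deepcopy(scan))
        return fresh

you = Agent(memories=["last physical"])
scan = you.snapshot()          # taken at your last physical
you.memories.append("explored the planet, met a grue")

replacement = Agent.restore(scan)
print(replacement.memories)    # ['last physical'] -- the intervening years are gone
```

Nothing in the restored copy bridges the gap between the scan and the grue; the intervening state is simply gone, which is the sense in which "resetting" a machine and "reprinting" a person look like the same operation.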
One suggested scenario is that someday, instead of relying on humans to fight our wars, to go into urban centers and root out insurgents, we might create monkey soldiers to perform this job. This would be accomplished by attaching computers to a monkey's brain and nervous system, either externally or as implants, to monitor, adjust, and direct the monkey's brain so that the monkey performs the tasks we want it to perform. When we want it to enter and clear a building, we overlay its perception of the building and the human insurgents with patterns its monkey brain understands, like sexual rivals or the presence of a monkey from an enemy social group. Similarly, we can expect this computer adjunct to monitor the monkey's biological systems: its heart rate, its blood oxygen levels, indicators of fatigue, and so on. It would seem a given that we would monitor its network of pain receptors to maintain an accurate understanding of the status of the monkey's biological tissues. Moreover, let's assume that this computer is not just a passive computational device, a box with wheels so to speak, but rather a sentient machine. Is there any practical limitation that would prevent the monkey's copilot from experiencing the status of the monkey's pain receptors as pain?

Let's take the hypothetical one step further and suppose an improvement on this design: instead of using non-biological computational devices, which are expensive to build, program, and use, they learn to substitute a biological computer. Instead of a machine brain, they use specially developed and genetically modified cat brains that have been custom programmed and hooked up to perform the same computational tasks that the non-biological brain performed. If you say the non-biological "copilot brain" couldn't feel pain, then you'd seem obligated to conclude that the cat-brain copilot can't feel pain either. And if the cat brain can't feel pain as a copilot, how is it able to feel pain as a normal cat brain? Where does this "the system is incapable of feeling pain" enter into the question of machine sentience, both biological and non-biological, and what are the minimum requirements for a system to "feel pain"?
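To make the copilot idea concrete, here's a toy sketch of "pain" as nothing more than system-state monitoring crossing a threshold. It's Python purely for illustration; the class names, channels, and threshold are all invented, not a claim about how such a system would actually be built.

```python
# Toy sketch: an internal "pain" signal as system monitoring crossing a
# threshold. All names and numbers are made up for illustration.

from dataclasses import dataclass

@dataclass
class SensorReading:
    channel: str      # e.g. "tissue_damage", "blood_oxygen"
    value: float      # normalized 0.0 (nominal) to 1.0 (critical)

class Copilot:
    PAIN_THRESHOLD = 0.6  # arbitrary cutoff for this toy example

    def __init__(self):
        self.pain_level = 0.0

    def monitor(self, readings):
        """Fold raw receptor readings into a single internal 'pain' state."""
        worst = max((r.value for r in readings), default=0.0)
        self.pain_level = worst
        return self.pain_level > self.PAIN_THRESHOLD

# Feed the copilot the monkey's receptor readings each tick.
copilot = Copilot()
in_pain = copilot.monitor([
    SensorReading("blood_oxygen", 0.2),
    SensorReading("tissue_damage", 0.8),   # damaged tissue reported
])
print(in_pain, copilot.pain_level)  # True 0.8
```

The point is only that nothing in that loop cares whether the readings come from silicon strain gauges or biological nociceptors.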