
Poll: Will artificial intelligence ever achieve true sentience?
This poll is closed.
There are good reasons to think this will never happen. (3 votes, 11.11%)
I can't prove it but absolutely not. The idea of artificial sentience is absurd. (3 votes, 11.11%)
There is no telling what the future may hold. It's a coin flip. (4 votes, 14.81%)
Yes, smart machines are trending in that direction already. (12 votes, 44.44%)
Absolutely yes, and I can describe to you the mechanisms which make it possible in principle. (2 votes, 7.41%)
Other. (Please explain.) (3 votes, 11.11%)
Total: 27 votes (100%)

Will AI ever = consciousness or sentience?
#61
RE: Will AI ever = consciousness or sentience?
(November 29, 2012 at 3:52 pm)whateverist Wrote: I of course think you are conflating expert processing with something much more subjective in nature. Outward appearances will never provide adequate support for the existence of subjective states. It would be a much easier task to program a machine to fool a human observer than it would be to create the conditions where a contemplation program corresponds to anything near what we ourselves mean by contemplation.

No, you could not prove that a machine is conscious simply by intuition, of course, nor even cite intuition as evidence. What I am saying is this: suppose you have a machine which behaves in such a way that the average person believes it is conscious, because it displays characteristics so similar to what we call 'consciousness' in humans that the only obvious difference is that it is not biological. What do we do then?

It's not an empty question, either; quite a few of our more important moral judgments are conditional on our perception of this. Many people have no moral qualm with the slaughter of animals for food because we tend to regard their consciousness as inferior to our own, thanks mainly to their apparent lack of (or greatly reduced) sentience. If you have an AI which seems humanlike in every regard, do we have to consider whether or not it has rights? Is it still a machine we can shut off and dismantle whenever we want? Can we make that decision on the basis of "we cannot prove it is conscious" when we cannot prove it within humans either?

I think it might be a very sticky situation at some point. If we achieve apparent consciousness in a machine before we can understand the mechanisms of consciousness, would it be ethical to treat machine intelligence as if it were lesser in value than our own?

(November 29, 2012 at 3:16 pm)Ryantology Wrote: Fooling a human observer is beside the point, though an interesting challenge for AI in its own right.

I don't know if it is beside the point, as there certainly are people who would be convinced enough of a machine's personhood to fight for its rights.

In any case, we have interesting times ahead.
#62
RE: Will AI ever = consciousness or sentience?
(December 1, 2012 at 5:46 am)DoubtVsFaith Wrote: @ the OP.

Yes.

Why not? If life came from "non-life" (or life that is so lifeless that it's virtually and practically "non-life") then why can't conscious and sentient biological life develop into and become sentient and conscious non-biologically mechanical life (and I don't mean mechanical in a "bad" way... mechanisms are more complicated than that. After all, suppose a paradoxical mechanism developed that allowed a mechanism that was both free and orderly?)? There's sexism and there's racism and there's species-ism but how about a new "label"(label (not that there will or won't be more to come)) for a perhaps currently unlabelled but already existent (perhaps) bias/prejudice/dogmatic attitude... or to put it less negatively: Something that perhaps isn't currently understood yet and sadly misleads us into unconsciously avoiding our true potential in this(these) world(s)/universe(s). What should we "label" this problem that there seems to seem to be to me to me to me? Should we label it positively, negatively or neutrally? Many people seem to seem to (at least in my view) see neutrality itself as hostile... but is it? How can it be if it's truly neutral? And can't it just as easily be friendly if it really can be hostile despite the fact that it's by definition neither? You can't have it both ways and have such balance (unless some supernatural/super-natural miracle (or perhaps logical paradox) was formed (or is forming)).

I would be very happy if some person or persons commented on my point of view, even if I don't get to respond to them because I would, frankly, and honestly, really just want to do my bit and make my mark by giving this stuff of thought some thought and pass on my message in a realistic way that hopefully moves enough sentient beings close enough to the ideal. And I do hope for some more minds trying to connect with my personal interpretation with their own personal interpretation.

I'm not sure I'm able to make complete sense of what you wrote. It sounds like you're speaking from a deep reservoir of thought and feeling on the subject, but only a very narrow slice of that came through; not enough to resolve into something concrete for me. (I do that myself. I'll be so intensely feeling or intimate with something that I start skipping whole sections of what I need to communicate to share my thoughts. Not uncommonly I'll go to edit a post and realize I've left out an entire step, or a sentence, or several.)

Anyway, I'm greatly intrigued by the promise of what seems to be behind the scant words you shared, but am not going to try to fill in your meaning with what I myself might guess that you mean and think (and feel). However, I would be delighted to hear more from you.


#63
RE: Will AI ever = consciousness or sentience?
(December 1, 2012 at 4:01 pm)Ryantology Wrote: I think it might be a very sticky situation at some point. If we achieve apparent consciousness in a machine before we can understand the mechanisms of consciousness, would it be ethical to treat machine intelligence as if it were lesser in value than our own?

Well we might at least reasonably treat it as though it was not mortal and did not feel pain. Those seem like pretty important criteria by which to differentiate between how we treat (hypothetically) conscious machines from humans and other biological creatures.

(November 29, 2012 at 3:16 pm)Ryantology Wrote: Fooling a human observer is beside the point, though an interesting challenge for AI in its own right.

I don't know if it is beside the point, as there certainly are people who would be convinced enough of a machine's personhood to fight for its rights.

In any case, we have interesting times ahead.

I suspect you will be disappointed on this score at least, but... here's to interesting times.
#64
RE: Will AI ever = consciousness or sentience?
(December 1, 2012 at 8:07 pm)whateverist Wrote:
(December 1, 2012 at 4:01 pm)Ryantology Wrote: I think it might be a very sticky situation at some point. If we achieve apparent consciousness in a machine before we can understand the mechanisms of consciousness, would it be ethical to treat machine intelligence as if it were lesser in value than our own?

Well we might at least reasonably treat it as though it was not mortal and did not feel pain. Those seem like pretty important criteria by which to differentiate between how we treat (hypothetically) conscious machines from humans and other biological creatures.

What do you mean when you say it isn't mortal? Can its life not end just as ours does? Why do you assume that it doesn't feel pain?


#65
RE: Will AI ever = consciousness or sentience?
(December 2, 2012 at 5:20 pm)apophenia Wrote: Why do you assume that it doesn't feel pain?

Lack of a nervous system maybe?
#66
RE: Will AI ever = consciousness or sentience?
(December 2, 2012 at 5:20 pm)apophenia Wrote:
(December 1, 2012 at 8:07 pm)whateverist Wrote: Well we might at least reasonably treat it as though it was not mortal and did not feel pain. Those seem like pretty important criteria by which to differentiate between how we treat (hypothetically) conscious machines from humans and other biological creatures.

What do you mean when you say it isn't mortal? Can its life not end just as ours does? Why do you assume that it doesn't feel pain?



There have been times when I wished I could have the information provided by pain without the actual sensations. I think our smart machines may get my wish. I should think pain for machines will be optional and more easily ameliorated than for us.

Mortality seems to be a question for living things. Even if machines are smart, "living" would seem to be a descriptor for biological beings only. And if they're not alive, they have nothing to fear from death.

In both these areas they would seem to have the advantage.
#67
RE: Will AI ever = consciousness or sentience?
(December 2, 2012 at 9:15 pm)Napoléon Wrote:
(December 2, 2012 at 5:20 pm)apophenia Wrote: Why do you assume that it doesn't feel pain?

Lack of a nervous system maybe?

The nervous system is just one, biological, application of a concept.

The introduction of malicious code into (and the methods of counteracting it within) a system is an act we have given a biological metaphor: 'viral infection'. A conscious computer could present its systems of damage/threat alert and response as the practical equivalent to our ability to utilize pain.
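To make the analogy concrete, here is a minimal sketch of what such a damage/threat alert system might look like. Everything in it is invented for illustration; it is not a real API, just one way the functional role of pain could be filled:

```python
# Illustrative sketch only: a damage-alert subsystem that plays the
# functional role of nociception. All names here are hypothetical.
import heapq

class DamageMonitor:
    """Collects damage/threat signals and preempts normal processing
    when their urgency crosses a threshold, the way pain preempts
    deliberate thought."""

    def __init__(self, pain_threshold=0.7):
        self.pain_threshold = pain_threshold
        self.signals = []  # heap of (negated urgency, source)

    def report(self, source, urgency):
        # urgency in [0, 1]: 0 = cosmetic fault, 1 = imminent destruction
        heapq.heappush(self.signals, (-urgency, source))

    def preempts(self):
        """True if the most urgent signal should override current goals."""
        return bool(self.signals) and -self.signals[0][0] >= self.pain_threshold

monitor = DamageMonitor()
monitor.report("malicious code in process table", 0.9)
if monitor.preempts():
    print("drop current task; respond to threat")  # the 'ouch' reflex
```

Whether crossing that threshold is felt, rather than merely registered, is of course the whole question.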
#68
RE: Will AI ever = consciousness or sentience?
(December 2, 2012 at 10:20 pm)Ryantology Wrote:
(December 2, 2012 at 9:15 pm)Napoléon Wrote: Lack of a nervous system maybe?

The nervous system is just one, biological, application of a concept.

The introduction of malicious code into (and the methods of counteracting it within) a system is an act we have given a biological metaphor: 'viral infection'. A conscious computer could present its systems of damage/threat alert and response as the practical equivalent to our ability to utilize pain.

Of course, that biological 'application' is also the origin of the concept, and whether it has application anywhere else is debatable.
#69
RE: Will AI ever = consciousness or sentience?
(December 1, 2012 at 12:00 pm)whateverist Wrote: Yes but it isn't an arbitrary boundary. Intelligence is the possession of the relevant information with the capacity to apply it appropriately to achieve desired outcomes. As "appropriately" approaches "optimally" we move from smart to smarter. But consciousness includes loads of special sauce. In addition to having the intelligence to perform smartly, to be conscious/sentient we should also care about those outcomes.

These two definitions seem to have a bit too much overlap for me, and they both rely on intentionality, which is notoriously difficult to identify. A lot of it seems based on ...

Quote:Free will alert, free will alert .. abort, abort. That was close.

Lol, exactly. How do we meaningfully distinguish a system that adjusts its behavior to produce optimal outcomes from a system that "desires" those outcomes--if such desires are rooted in a material thing like a brain?
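As a toy illustration of how little machinery "adjusting behavior to produce optimal outcomes" actually requires (every name and number below is invented for the example):

```python
# A system that merely adjusts its behavior toward better outcomes.
# Nothing in the code looks like 'desire', yet from the outside its
# behavior is indistinguishable from wanting to be at the peak.

def comfort(position):
    return -(position - 7.0) ** 2  # outcomes are best at position 7

position = 0.0
for _ in range(50):
    # try a small step each way; keep whichever improves the outcome
    if comfort(position + 0.5) > comfort(position):
        position += 0.5
    elif comfort(position - 0.5) > comfort(position):
        position -= 0.5

print(position)  # settles at 7.0, as if it 'wanted' to be there
```

If desire just is this sort of disposition realized in a brain, the distinction does no work.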

Now, I'm carrying along in life merrily as if I truly possess free will, in part because of the ethical ramifications of not having it. But, the more dynamic systems theory I read, the less liberal humanism I actually am willing to accept. Some of the justifications I've read for free will seem to be the deformed discursive offspring of theology, and not adequately separated from it--sort of like that film "Basket Case."

More importantly, much of the advocacy for intention and free will seems to be rooted in our need for ethics. I have a pet theory that gradual examination of modern horrors like the Holocaust and the Great Famine in China has (re-)created a moral need for theories of "free will," and other sentimentalities of that sort. Wink

Quote:I'm appreciating hearing your thoughts on this and have more reaction to this post. But in the interest of having a life that is more than virtual, I need to get outside.

I can't say my IRL world is terribly interesting at the moment, being mostly composed of paperwork and past-due writing deadlines. What you're witnessing here is the byproduct of indefatigable procrastination on my part. This will probably come to a crashing halt when the better half discovers what I've been doing.

Z
#70
RE: Will AI ever = consciousness or sentience?
(December 3, 2012 at 4:12 am)DoktorZ Wrote:
Quote:Free will alert, free will alert .. abort, abort. That was close.

Lol, exactly. How do we meaningfully distinguish a system that adjusts its behavior to produce optimal outcomes from a system that "desires" those outcomes--if such desires are rooted in a material thing like a brain?

Now, I'm carrying along in life merrily as if I truly possess free will, in part because of the ethical ramifications of not having it. But, the more dynamic systems theory I read, the less liberal humanism I actually am willing to accept. Some of the justifications I've read for free will seem to be the deformed discursive offspring of theology, and not adequately separated from it--sort of like that film "Basket Case."

More importantly, much of the advocacy for intention and free will seems to be rooted in our need for ethics. I have a pet theory that gradual examination of modern horrors like the Holocaust and the Great Famine in China has (re-)created a moral need for theories of "free will," and other sentimentalities of that sort. Wink

I'm of the opinion that much of the defense of free will takes the form of frantic attempts to save a particular view of ethics. Like Christians who hypothesize that you cannot have goodness and morality without God, people can't imagine how you can have a workable society without the ethics of personal responsibility, and personal responsibility without free will. I'm persuaded, though, that when you look at the practical application of ethics, that of controlling behavior through law, incentives, and punishments, the need for free will evaporates. As noted elsewhere, modern criminal punishment has four primary goals: 1) protection of society, by removing dangerous elements; 2) retribution, an eye for an eye; 3) rehabilitation, changing a problem behavior; 4) deterrence, providing an incentive for people to avoid those behaviors. Of these four goals, only retribution seems to depend on free will and moral culpability, and it has long been recognized as the theory of punishment with the most practical and ethical problems.



@whateverist:

I think you're engaged in a bit of question-begging and a failure of imagination. You're enacting the very fear that DoktorZ voiced: that if you only acknowledge sentience of a certain pattern, the human pattern, you will blind yourself to other, equally valid patterns. The questions of mortality and pain bring this to the fore. People are often said to imagine robots as a box with wheels, and that mindset limits what they can imagine a robot to be.

At bottom, unless you're advocating that we are something supernatural or basically inexplicable, we too are machines; biological machines, but machines nonetheless. If other machines are not considered capable of feeling pain, then we don't "really" feel pain either, and this pain you speak of becomes a mere placeholder for 'is human' or 'is biological'. At bottom, we are nothing more than boxes with wheels, too: our skulls and chests the box, our brains the computer, and our arms and legs the wheels. Anything you can deny a non-biological machine, you must either equally deny to humans and animals, or find some supernatural explanation for.

Because if pain is just an idea in a biological machine prompted by certain dispositions of its systems, which it pretty clearly is, I don't see why you think a machine intelligence would be incapable of similar ideas. Perhaps you're imagining a machine intelligence as a box with wheels: a box without receptors and sensors and actuators, without the ability to monitor the status of its systems, just as our pain receptors do for us. If so, then you're illegitimately sneaking in rather self-serving limitations on this machine sentience and falling into the trap that DoktorZ outlined.

Let me offer a couple of hypotheticals.


First, imagine a future in which we've determined that the important functions of our biological life form are all macroscopically well described, that quantum effects play no significant role. We've progressed to the point that we can scan and digitize the entirety of a human being and store it, or process that scan for diagnostic purposes. Suppose, furthermore, that we can take these scans and recreate the pattern contained in the image. We can create more of you just by scanning you and printing off copies. Now suppose you go off to explore a planet for a few years and you're eaten by a grue or something. But we still have your scan from your last physical, so the doctors simply print off a new copy of you, pat you on the head, and tell you to go your merry way. Does this mean that you didn't die? Does this mean you are no longer mortal? If not, how is this any different from disrupting the systems of a sentient machine and then resetting it to a state it might have been in at a prior time? (And if current machine intelligence is any indication of the trend, recreating a meaningfully recent state of a machine intelligence would be just as difficult as recreating you is today.)
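To make the thought experiment concrete, here is a minimal sketch that treats the "scan" as nothing more than serialized state; all names are hypothetical and the details are obviously nothing like real biology:

```python
# Illustrative only: 'scanning' a person is modeled as serializing
# state, and 'printing a new copy' as deserializing it.
import pickle

class Agent:
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

you = Agent()
you.experience("last physical exam")
scan = pickle.dumps(you)           # the stored scan

you.experience("exploring the planet")
you.experience("eaten by a grue")  # the copy never gets these memories
del you                            # the original is gone

you_again = pickle.loads(scan)     # the doctors print off a new copy
print(you_again.memories)          # ['last physical exam']
```

The reprinted copy lacks everything that happened after the last scan, which is exactly the sense in which restoring a machine to a prior state does and does not undo its death.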


One suggested scenario is that someday, instead of relying on humans to fight our wars, to go into urban centers and root out insurgents, we might create monkey soldiers to perform this job. This would be accomplished by attaching computers to a monkey's brain and nervous system, either externally or as implants, to monitor, adjust, and direct the monkey's brain so that the monkey performs the tasks we want it to perform. When we want it to enter and clear a building, we overlay its perception of the building and the human insurgents with patterns its monkey brain understands, like sexual rivals or the presence of a monkey from an enemy social group. Similarly, we can expect this computer adjunct to monitor the monkey's biological systems: its heart rate, its blood oxygen levels, indicators of fatigue, and so on. It would seem a given that we would monitor its network of pain receptors to maintain an accurate understanding of the status of the monkey's biological tissues. Moreover, let's assume that this computer is not just a passive computational device, a box with wheels so to speak, but rather a sentient machine. Is there any practical limitation that would prevent the monkey's copilot from experiencing the status of the monkey's biological pain receptors as pain?

Let's take the hypothetical one step further and suppose an improvement upon this design: instead of using non-biological computational devices, which are expensive to build, program, and use, the designers learn to substitute a biological computer. Instead of a machine brain, they use specially developed and genetically modified cat brains that have been custom programmed and hooked up to perform the same computational tasks that the non-biological brain performed. If you say the non-biological "copilot brain" couldn't feel pain, then you'd seem obligated to conclude that the cat-brain copilot can't feel pain either. And if the cat brain can't feel pain as a copilot, how is it able to feel pain as a normal cat brain? Where does this "the system is incapable of feeling pain" enter into the question of machine sentience, both biological and non-biological, and what are the minimum requirements for a system to "feel pain"?
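Here is a hypothetical sketch of that final substrate swap; the names and threshold are invented, but the point is that the copilot's damage-monitoring logic is identical whichever brain runs it:

```python
# The same 'copilot' damage-monitoring function on two substrates.
# Every name here is invented for illustration.

def register_damage(nociceptor_level, threshold=0.5):
    """Shared copilot logic: flag tissue damage above a threshold."""
    return nociceptor_level > threshold

class SiliconCopilot:
    substrate = "machine brain"
    def feel(self, level):
        return register_damage(level)

class CatBrainCopilot:
    substrate = "modified cat brain"
    def feel(self, level):
        return register_damage(level)  # identical logic, different tissue

for copilot in (SiliconCopilot(), CatBrainCopilot()):
    print(copilot.substrate, "alarm:", copilot.feel(0.8))
```

If the alarm state counts as pain on one substrate, on what grounds does it fail to count on the other?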




