RE: Philosophical zombies
March 3, 2018 at 9:10 pm
(This post was last modified: March 3, 2018 at 9:12 pm by polymath257.)
(March 3, 2018 at 8:03 pm)bennyboy Wrote:(March 2, 2018 at 9:13 am)polymath257 Wrote: Since I first read Chalmers's book where he introduced the idea of philosophical zombies, I have been convinced his argument is flawed. Consciousness is a matter of processing information in the brain. That means that anything physically identical to a conscious person will also be conscious.
Philosophers like to talk about the 'hard problem of consciousness', but I have to admit I have never grasped the fundamental difficulty. They seem to think that no physical explanation can be enough to explain our consciousness, but I see it as quite the opposite. Hell, we already have a LOT of the details of how consciousness arises in the brain, from the awareness step in the brain, to memory, to planning, etc.
Where is the gap?
Saying "consciousness is" and then saying anything more than "the subjective awareness of what things are like" is fine if you are programming robots, but it's not very solid if you are trying to discuss the philosophy of mind.
Generally, physicalists tend to conflate the subjective experience of mind with the physical correlates of mind: brain function, behaviors, etc. They say something like "If it walks like a duck and talks like a duck. . . it must think like a duck." But the fact is that there is not even the beginnings of a good theory of consciousness. Nobody has the faintest clue how it is that "dead" matter (i.e. unthinking stuff) arrives at a subjective experience of itself or its environment, under any configuration. Not only that, we cannot even determine whether any given physical system has any subjective experience of the universe. Is that animated little thing we found in a cave on Mars, that silicon-based "lifeform," really alive in the sense that I am?
This is soon to become a non-trivial issue. What happens when AI programs (Googlandra or whatever) are so convincingly human that they can elicit emotional responses in humans? We will have people (idiots in my opinion) believing that Googlandra is a thinking, feeling agent, and there will be movements to grant her rights and protections, and maybe to elect her president.
(March 3, 2018 at 6:17 pm)polymath257 Wrote: Here is the problem: if you remove the part of the brain that produces consciousness, you, by necessity, affect the behavior of every other part of the brain *because* consciousness is so spread out. So, there would simply be no way to eliminate consciousness and have that removal be *undetectable*. In fact, I would go further: removing that capability would result in severe incapacity, even behaviorally.
The idea of epiphenomenalism suggests that consciousness is produced by physical processes but is just an 'extra' that does nothing. My position is that consciousness is quite essential to do even ordinary human tasks. Removal of awareness would immediately lead to severe problems in many aspects of our existence which would be clearly visible to any bystander (sort of like an extreme mental illness, which it would be).
So, again, even a pseudo-P-zombie is incoherent, as far as I can see. If you pass as conscious, you will *be* conscious.
There is no such part. You might just as well talk about removing someone's soul.
(March 3, 2018 at 5:37 pm)Khemikal Wrote: It would, strongly. The reason that p-zombies are an amusing proposition is that they posit something so subtle that it flies right under the radar. However that thing might be aware of itself (as, for example, any common machine with self-referential data collection, like a machine that knows it is supposed to respond to "how are you" with some relevant information about some part of its system), it would not be the same; it would be meaningfully different from us.
The proposition is that the p-zombie would be capable of all of our behavior, a carbon copy, with one difference. One answer to that is that it couldn't, in fact, be a behavioral carbon copy of us without possessing that "x", because that "x" is what drives those behaviors. We can posit that it need not be, and that's the p-zombie proposition... but in humans, it is, and so a mechanically equivalent human would be a functionally equivalent human. Any explanation for that one difference would hide many other differences beyond the sole difference referenced.
P-zombies are a cognitive trap, lol.
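Khemikal's example of a machine with self-referential data collection can be sketched in a few lines. This is only a toy illustration; the class, sensor names, and values are all invented here, and nothing about it implies subjective experience:

```python
class StatusMachine:
    """A hypothetical machine that collects data about its own components
    and can report on them when asked "how are you" -- self-referential
    data collection without any claim of awareness in the human sense."""

    def __init__(self):
        # Invented internal sensors, purely for illustration.
        self.sensors = {"battery": 0.87, "temperature_c": 41.0}

    def respond(self, query):
        # Answer a status query with relevant information about its system.
        if query.strip().lower() == "how are you":
            return f"Battery at {self.sensors['battery']:.0%}, all systems nominal."
        return "Query not understood."

machine = StatusMachine()
print(machine.respond("how are you"))  # Battery at 87%, all systems nominal.
```

The point of the sketch is the asymmetry Khemikal draws: the machine's "self-report" is just a lookup and a formatted string, which may or may not be meaningfully different from what we do.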
There are a lot of differences between me and an android. I cannot know whether an android really feels, or just uses complex programming and AI algorithms to make it appear that it does.
Will you allow sufficiently human-seeming androids full legal protections? Allow them to take jobs instead of your children? Allow them to "declare as human" and run for president?
(March 3, 2018 at 5:34 pm)LadyForCamus Wrote: If a Being is capable of clearly, verbally communicating that it is aware of itself, doesn’t it follow that it is self-aware?
Absolutely not, unless you define "self-aware" in those terms.
But the problem is this: I have a particular type of self-awareness that allows me to know what it feels like to watch a sunset or to drink a cup of hot chocolate. I do not believe this to be the same as a robot that can determine the chemical composition of fluids it has taken in and then verbalize that composition.
Unless, that is, the Universe is panpsychic. Then all bets are off.
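The fluid-analyzing robot described above can be made concrete with a short sketch. The function name, compound names, and fractions are hypothetical; the point is just that "determine the composition and verbalize it" is a purely mechanical report, which is exactly what bennyboy contrasts with knowing what hot chocolate is like:

```python
def verbalize_composition(sample):
    """Turn a dict of compound -> fraction into a spoken-style report."""
    parts = [f"{frac:.0%} {compound}" for compound, frac in sorted(sample.items())]
    return "This fluid is " + ", ".join(parts) + "."

# Hypothetical sensor readings for a cup of hot chocolate.
hot_chocolate = {"water": 0.80, "sugar": 0.12, "cocoa solids": 0.05, "fat": 0.03}
print(verbalize_composition(hot_chocolate))
# This fluid is 5% cocoa solids, 3% fat, 12% sugar, 80% water.
```

Whether such a report is categorically different from human taste-talk, or (as polymath257 argues below) the same thing implemented in carbon, is the disputed question.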
On the contrary, the 'duck' theory is about the only possible theory of consciousness we can have. We look at things that are 'unquestionably' conscious, like human beings, and see whether or not other things have the correlated properties. Those that do are conscious. That is an operational definition that seems perfectly good and consistent.
And I'm not saying that we currently have a description of consciousness at the level of neural activity. But I see no fundamental reason why it should be impossible. In fact, given enough time and energy, I think we can find the relevant correlates and solve the problem. And yes, those correlates would *be* an explanation of consciousness and how 'dead' matter becomes conscious.
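The operational 'duck' definition polymath257 describes can be sketched as a simple correlate check. The property list here is invented for illustration and is obviously far cruder than any real neural correlate would be:

```python
# Properties correlated with beings we take to be unquestionably conscious.
# This set is a made-up stand-in for whatever the real correlates turn out to be.
CORRELATES = {"reports_self_awareness", "integrates_sensory_info",
              "forms_memories", "plans_ahead"}

def operationally_conscious(observed_properties):
    """Classify a system as conscious iff it exhibits every correlate."""
    return CORRELATES <= set(observed_properties)

human = {"reports_self_awareness", "integrates_sensory_info",
         "forms_memories", "plans_ahead"}
thermostat = {"integrates_sensory_info"}

print(operationally_conscious(human))       # True
print(operationally_conscious(thermostat))  # False
```

The design choice mirrors the argument: the definition is entirely in terms of observable correlates, which is precisely what bennyboy objects to and what polymath257 claims is the only workable option.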
In a very similar way, we have the main answers for how 'dead matter' becomes alive: life is a complex collection of chemical reactions that allow homeostasis, reproduction, growth, etc.
Yes, when androids become conscious (by this definition), then they should be allowed all 'human' rights. To do anything else would be, in my mind, immoral.
See, I *do* think our ability to wax eloquent about a sunset is just a 'biological robot', i.e., us, taking in information and verbalizing it. The only difference I can see is that one is silicon-based and the other is carbon-based.