RE: Free will Argument against Divine Providence
August 12, 2013 at 8:26 am
(This post was last modified: August 12, 2013 at 8:29 am by bennyboy.)
(August 12, 2013 at 6:38 am)genkaus Wrote: True enough. Your definition doesn't work for two specific reasons:

1. Adequacy - An adequate definition should be able to explain the concept or the phenomena to a person not already familiar with it. Do you think your definition can explain the meaning of awareness to an entity not capable of experience?

Only if it thinks experience is the same as brain function, and if it has access to an fMRI. But it's not the fault of the "entity" if it is not capable of experiencing as I do.
Quote:2. Operationalization - While studying the concept you do need to first operationalize it, i.e. specify the underlying principles, define the limits, etc. - basically, make it good enough to be taken to the lab.

"Good" enough is just a euphemism for "compatible." And that's begging the question-- you make a definition in physical terms, and later, once the dust has settled and nobody realizes that an operational definition is different from the original, you say, "Look-- there's a picture of someone's mind-- right there on the computer screen." But you're not talking about their experience-- you're talking about blood flow in the brain.
(August 12, 2013 at 5:03 am)bennyboy Wrote: All systems "process" all other systems to some degree, since they are all linked by gravity, and by the exchange of photons. Anyway, who's to say a particular collection of particles is a "system," and another is just a bunch of particles?

Quote:Equivocation much? When we talk about data-processing systems, the "process" refers to specific actions taking place with specific results - it does not automatically apply to all systems. And we are to say which collections of particles constitute a system and which do not.

Well, if we are defining reality based on our ideas, rather than vice versa, then maybe we should talk about some kind of idealistic monism. Because I think if you strip away all the ideas in physical monism, you are left with a bunch of wave functions, and no objects which can have attributes like will or the ability to experience. We are in the realm of ideology here, not one of objective reality.
Quote:Well, if you accept the theoretical possibility of machines being capable of experience, then you are not begging the question. However, if that possibility were shown to be true, you would also have to accept that phenomena such as mind/sentience/experience are not limited to dualism and have meaningful existence within a monist context as well.

As I said, I accept the theoretical possibility, but I challenge you to prove it. It must be assumed, just as I assume that the text ascribed to "genkaus" comes from a sentient human mind. This is a pragmatic assumption-- but it cannot be "shown to be true."
Quote:As for the question of determining actual, I'd say not all aspects of human behavior can be mimicked. In fact, not even the behavior of your goldfish can be mimicked without there being the capacity for experience. Certain facts of your consciousness - such as your preferences, your tastes, your aversions, etc. - develop and change over time as a result of your capacity to experience. Should the machine not be capable of experience, this development would not be seen in it.

No, it doesn't actually have to be capable of experience. It just has to be able to process data AS THOUGH it were capable of experience. Because if it walks like a duck and quacks like a duck, we know for sure it feels like a duck-- right?

Quote:Now this is where things get complicated. We do not yet know which elements of consciousness define it, which elements are necessary consequences, which are biological extras, and how they all interact with each other. Would androids need to dream, or even sleep at all? Why would we create something capable of feeling pain or suffering or sadness? Maybe we could just get rid of those elements. Would we be able to simulate the emotion of sadness completely - not just the external signs of it? I mean, we all have faked emotions from time to time, so within that context we are acting like an unfeeling machine. But if anyone was looking at our brains in that moment, they'd know whether or not we were actually experiencing those emotions.

We don't know. . . yet? I think what you meant to say is, "We don't know. . ." "Yet" is a predictive word, and I don't think you can demonstrate that the discovery of the elements you mentioned is guaranteed. . . or even possible.
Quote:That's a rather defeatist attitude. Tell me then, why should I believe that you are an actual human being and not a philosophical zombie? Or, try a simpler problem: how would you - without referring to mutually accessible visual data - convince me that you are not blind?

As I've already said, non-solipsism is a pragmatic assumption, not a provable fact. I accept this assumption because interacting with people I consider "real" is more interesting to me, and feels more natural to me, than not doing so.
As for blindness-- let's change your example. How would you explain to a worm (if you could speak wormy) what it means to see? Or how could a bat explain to you what it feels like to use natural sonar?
Most importantly, what is the chance that ALL human beings have some shortcoming of which they are unaware, which makes their view on the reality of the universe so hopelessly skewed that they get it all wrong? Are we as clever, and as capable of objective observation, as we think we are? I'm going to say-- almost for sure not. And yet we insist on defining reality exactly by those limitations. I suppose a worm would say, "What's all this seeing business you keep going on about? Make me taste it, or it's not real."