(September 1, 2013 at 5:22 pm)bennyboy Wrote: You can "indicate" whatever you want as often as you want, but the post count isn't evidence of truth. The behaviors you are talking about are sufficient to establish brain function, not actual experience. If you are trying to establish that a black hole has fairies inside it, you can't get there from establishing that light seems to be bending in ways that indicate a black hole.
Within this analogy, you are the only one trying to establish that the black hole has fairies inside it. All I'm doing is establishing the existence of a black hole without indicating its cause.
If the light bends in ways indicative of a black hole, then the most reasonable conclusion is that there is a black hole causing it to bend. This statement so far says nothing about the fundamental nature of the black hole itself. At this point, you can say that the black hole is caused by superdense matter or by fairies.
Similarly, if an entity displays behavior indicative of experience, then the most reasonable conclusion is that actual experience is causing that behavior. This statement says nothing about the fundamental nature of experience itself. At this point, you can say that experience is caused by brain function or by a soul.
However, further investigation of the facts reveals that the black hole is made of superdense matter and experience is caused by brain function.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: You can't prove that behavior requires actual experience, since you are a physical monist. You can only prove that behavior requires brain function. It's simple: data in, processing, behavior out. No fairies required.
Wrong. Again. As a physical monist, I regard experience as a form of internal data-processing. Therefore, for behavior specific to that data-processing to occur, experience must occur as well.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: . . . or a duck-like mechanism, lacking ducky feelings.
No. It'd still be a duck.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: The point is I don't accept that the Cyberboy 2000 actually experiences, and that the philosophical implications of allowing a world of Cyberboys to compete with humans for resources because they "experience" (read: behave like they experience) would mean a loss of that special quality of actual experience which I believe humans have, and Cyberboys may not.
Hat-trick. Three fallacies in one sentence. That has to be a record.
Strawman - claiming I argue that if the Cyberboy 2000 is capable of actual experience, then it would have to be given the same rights as humans. Note that I've argued the exact opposite.
Appeal to Consequences - acknowledging that the Cyberboy 2000 is capable of experience would result in the loss of humans' 'special' status, so you refuse to accept it.
Begging the question - starting from the assumption that the Cyberboy 2000 is not capable of experience and therefore rejecting all evidence of its behavior indicative of experience.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: I do not accept your definition of imagination. Imagination requires an image or vision of the problem, not simply behaving like one has those things.
Imagination is not limited to visual cues. However, the program does have an image of the problem - that'd be the chess board you see on the screen. And if you look into its analysis, you'd see the possible future images of the chess board based on its expectation of your moves. Any way you slice it, the program is imagining your future moves (since those haven't been made yet) and making its own moves in anticipation.
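To make that concrete, here's a minimal look-ahead sketch in the spirit of a chess engine - a toy take-1-or-2-stones game rather than actual chess, with names of my own invention - showing how a program examines future positions that don't exist yet before choosing its move:

```python
# Toy look-ahead: players alternately take 1 or 2 stones from a pile;
# whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Score board states that do not exist yet: +1 if the
    maximizing player ends up winning, -1 otherwise."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Choose the move whose imagined future is best for us."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(7))  # -> 1: taking one stone leaves the opponent a losing pile
```

Every position the program scores here is an anticipated future, not the current state of the board - which is exactly the sense in which I'm using "imagining".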
(September 1, 2013 at 5:22 pm)bennyboy Wrote: Absolutely, I do. We could be in a BIJ, a BIV, the Matrix, or the Mind of God. We could be another living entity having a long dream. There are many scenarios in which the world as it seems, including all the experiences we have of science, could be non-representative of reality.
Coming up with a hypothetical and establishing that hypothetical as possible are two different things. Simply saying "we could be in a BIV or the Matrix, etc." is not sufficient. You have to establish that it is logically coherent as well. And you have not done so.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: As you've pointed out, our perception is fundamentally flawed.
I HAVE MOST CERTAINLY NOT. My position is that our perception may have occasional flaws, but it is fundamentally correct.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: No, but reality might. Science is good at making bridges, but demanding that the universe conform to the physical monism required for science isn't to make a new discovery about it: it's as truth-seeking as religious institutions declaring astronomy heretical. Either you can prove that your view represents reality, or you cannot. Currently, you cannot.
Actually, you can prove that your views represent reality - because if they didn't, the bridges would not stand. Science doesn't require the universe to conform to physical monism - it says that physical monism is an accurate representation of reality. Should that turn out not to be the case, then scientific theories based on it would contradict reality. And so far, they are in agreement.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: Nope. You are using experience to prove the nature of experience.
That's precisely what I said - having assumed experience as the basis of knowledge and established its validity by consistent application, using it to examine and prove the nature of experience is not begging the question.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: AI ALWAYS produces behaviors which are not specifically programmed. That's what AI is: a programmed simulated evolutionary process, resulting in behaviors which aren't necessarily predictable to even the programmers.
In that case, an AI developing the capacity to experience without that capacity being explicitly programmed into it shouldn't be surprising.
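Your own description can be illustrated with a sketch - the fitness function and parameters here are placeholders I made up - of a programmed evolutionary process whose winning behavior is never written by the programmer, only selected for:

```python
import random

random.seed(0)  # reproducible toy run

def fitness(genome):
    # The programmer specifies only what counts as "good":
    # here, neighbouring bits that differ.
    return sum(a != b for a, b in zip(genome, genome[1:]))

# Random starting population of 16-bit genomes.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Offspring are mutated copies of the fittest survivors.
    offspring = [[bit ^ (random.random() < 0.05) for bit in random.choice(survivors)]
                 for _ in range(20)]
    population = survivors + offspring

print(max(population, key=fitness))  # e.g. an alternating 0/1 pattern nobody hand-coded
```

The alternating pattern that emerges appears nowhere in the source code - the behavior was not "programmed in it" in any direct sense, which is precisely the point.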
(September 1, 2013 at 5:22 pm)bennyboy Wrote: I think the problem here is that you are flip-flopping between semantic sets: mind-existent words, and physical-monist words. Sure, you can call computer processing "imagination" if that word is useful in explaining a model you have in AI. However, that word already refers to my subjective experience of forming ideas, where abstract images flutter around a kind of mental canvas. You could call the Cyberboy's foot-stamping "experience," but that word already refers to my ability to see red as redness, not simply to the function of stopping at an intersection if I detect light of a particular frequency.
The problem comes when you try to conflate "imagination" and "experience" with my actual imagination, and my actual experience.
I was wondering when you'd move your goalposts to the semantic position.
Unfortunately for you, you do not have a copyright on mind-existent words. You do not get to start with the assumption that "experience", "imagination", etc. are words that are meaningless within a physical-monist context and that any application of those words within that context is a redefinition. And you most certainly do not get to do this without even providing a definition of the words which you regard as the true Scotsman.
Your imagination may be limited to images fluttering on a canvas - mine isn't. That does not mean I don't have "true" imagination. Also, imagination specifically refers to the process of forming a particular kind of ideas - not your subjective experience of that process. As for the Cyberboy's foot-stamping - I never referred to it as "experience". I specifically referred to it as behavior resulting from experience, the same way I'd refer to your stopping at a red light as behavior resulting from experience.
The processing of the visual frequency received from the light is called "seeing red". The processing of that process itself is called "experiencing redness". Since you do not have inherent code in your brain that makes you stop at a red light, your action of stopping is the result of your subjective awareness, i.e. the latter process. In the same way, if the Cyberboy 2000 does not have code in its brain where the direct processing of visual frequency results in stopping, then its behavior of stopping at the red light would also be the result of the second process, i.e. its subjective awareness of redness. Dismissing it as "not actual experience" is a baseless proposition and a no true Scotsman fallacy.
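A rough sketch of those two levels as I've described them - all names here are hypothetical, chosen only to separate the first-order processing from the processing-of-processing that drives the behavior:

```python
def see(frequency_thz):
    """First-order processing: classify incoming light.
    This step alone is 'seeing red'."""
    return "red" if 400 <= frequency_thz <= 480 else "other"

class Agent:
    def __init__(self):
        self.awareness = []  # records of the agent's own processing

    def perceive(self, frequency_thz):
        colour = see(frequency_thz)             # seeing red
        self.awareness.append(("saw", colour))  # processing that processing:
        return colour                           #   'experiencing redness'

    def act(self):
        # Behavior driven by the second-order record, not by a
        # hard-wired frequency -> stop reflex.
        if ("saw", "red") in self.awareness:
            return "stop at the intersection"
        return "keep driving"

driver = Agent()
driver.perceive(430)  # red light sits roughly in the 400-480 THz band
print(driver.act())   # -> stop at the intersection
```

Note that removing the `awareness` record breaks the stopping behavior even though `see` still runs - the behavior depends on the second process, which is the whole point.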