Posts: 8717
Threads: 128
Joined: March 1, 2012
Reputation:
53
RE: Pleasure and Joy
September 1, 2013 at 12:18 pm
A couple advantages of dualism are as follows:
Dualism does not conflate neuroscience with materialism. One is a science, the other is a philosophy.
Physical phenomena, in themselves, have no inherent meaning. Dualism provides a place, absent in materialism, for irreducible qualities like qualia and intentionality.
Unlike materialism, dualism affirms that the subjective contents of mental states have explanatory relevance while avoiding both epiphenomenalism and over-determination. This allows natural selection to reward rationality.
Posts: 302
Threads: 9
Joined: March 27, 2013
Reputation:
5
RE: Pleasure and Joy
September 1, 2013 at 12:23 pm
(August 31, 2013 at 7:18 am)Harris Wrote: Another thing, which astounds all scientists and philosophers, are the subjects of Grand Design and Supernatural Balance in the universe. Universe is a place of order, balance, and symmetry. Think of the earth if it moves only 1 cm towards or away from the sun, this short deviation from its path becomes a threat to life on earth.
Lol, Google is your friend, cuz you obviously never attended any science classes ...
The universe is driven by chaos (orderly disorder, if you will), and not on all scales either; in the sub-atomic world there are totally random events (no cause at all).
BTW, the Earth moves away from the Sun at a rate of about 15 cm per year. Again, Google is your friend.
Posts: 3188
Threads: 8
Joined: December 9, 2011
Reputation:
31
RE: Pleasure and Joy
September 1, 2013 at 12:49 pm
(September 1, 2013 at 12:18 pm)ChadWooters Wrote: A couple advantages of dualism are as follows:
Dualism does not conflate neuroscience with materialism. One is a science, the other is a philosophy.
Physical phenomena, in themselves, have no inherent meaning. Dualism provides a place, absent in materialism, for irreducible qualities like qualia and intentionality.
Unlike materialism, dualism affirms that the subjective contents of mental states have explanatory relevance while avoiding both epiphenomenalism and over-determination. This allows natural selection to reward rationality.
How is any of this an advantage?
The existence of a logical and rational view of reality requires that there be no conflict between science and philosophy. What you call conflation is in fact consistency.
Qualities like qualia and intentionality do not need to be irreducible for physical phenomena to have objective (not inherent) meaning. There is a place for them within materialism, just not a place that regards them as irreducible. And the meaning and value assigned to physical phenomena by a consciousness is not diminished by consciousness itself being a physical phenomenon.
The affirmation of the subjective contents of mental states given within dualism relies on inexplicable and unprovable premises, whereas materialism affirms the explanatory relevance of subjective states while satisfactorily explaining the states themselves.
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation:
45
RE: Pleasure and Joy
September 1, 2013 at 5:22 pm
(This post was last modified: September 1, 2013 at 5:27 pm by bennyboy.)
(September 1, 2013 at 11:32 am)genkaus Wrote: And as I've indicated many, many times before, direct access is not required to establish existence. Not for black-holes, dinosaurs or a murder and not for subjective experience either. The only assumption here is that subjective experience, like any other existent entity, has a specific form of existence and provides specific evidence of its existence. That assumption is made for all the other objects as well. Your continuous repetition of "if you can't directly observe it, you can't know it exists" has been invalidated by science in multiple scenarios. You can "indicate" whatever you want as often as you want, but the post count isn't evidence of truth. The behaviors you are talking about are sufficient to establish brain function, not actual experience. If you are trying to establish that a black hole has fairies inside it, you can't get there from establishing that light seems to be bending in ways that indicate a black hole.
Quote:If I prove that the behavior requires actual experience of processed information, then I have proven that it is capable of actual experience. And guess what, this is precisely what the scientists in the field of neuroscience have done.
You can't prove that behavior requires actual experience, since you are a physical monist. You can only prove that behavior requires brain function. It's simple: data in, processing, behavior out. No fairies required.
Quote:That is the test. How are you not getting this? If it walks like a duck and quacks like a duck, then it is a duck.
. . . or a duck-like mechanism, lacking ducky feelings.
Quote:Given that my argument is that subjective experience itself is a mechanical, deterministic form of data-processing, I don't see the point of this statement.
The point is I don't accept that the Cyberboy 2000 actually experiences, and that the philosophical implications of allowing a world of Cyberboys to compete with humans for resources because they "experience" (read: behave like they experience) would mean a loss of that special quality of actual experience which I believe humans have, and Cyberboys may not.
Quote:Actually, we do know that it has imagination. All we have to do is look at the Chess program.
I do not accept your definition of imagination. Imagination requires an image or vision of the problem, not simply behaving like one has those things.
Quote:Is it possible? That reality is not what it seems? We are not talking about errors in perception here. Science accounts for those errors and has measures in place for correction. What you are suggesting here is the possibility that perception itself is invalid. Got a way to justify that possibility?
Absolutely, I do. We could be in a BIJ, a BIV, the Matrix, or the Mind of God. We could be another living entity having a long dream. There are many scenarios in which the world as it seems, including all the experiences we have of science, could be non-representative of reality.
As you've pointed out, our perception is fundamentally flawed.
Quote:Do I really have to remind you that science does not deal in absolutes?
No, but reality might. Science is good at making bridges, but demanding that the universe conform to the physical monism required for science isn't to make a new discovery about it: it's as truth-seeking as religious institutions declaring astronomy heretical. Either you can prove that your view represents reality, or you cannot. Currently, you cannot.
Quote:The reason this argument fails is that the parent/child statements are not identical. One addresses the nature of perception, the other addresses the nature of mind.
The statement "mind perceives reality" (parent) is axiomatic - make no mistake about that. But it says nothing about the nature of mind itself. Any question regarding the nature of mind is a separate consideration.
Nope. You are using experience to prove the nature of experience.
Quote: (September 1, 2013 at 6:25 am)bennyboy Wrote: Well, the Cyberboy 2000 says, "Yum yum, I want the chocolate ice cream, not the yucky strawberry," rubs its belly, drools a little, and opens its eyes slightly wider to show that what it is looking at "pleases" it. It stamps its little cyber-feet if you tell it no, and makes annoying noises in the car on the way home. None of this means it's actually experiencing anything.
If we establish that the specific behavior is not present in the initial programming, then yes, it most certainly means that it is actually experiencing something. AI ALWAYS produces behaviors which are not specifically programmed. That's what AI is: a programmed simulated evolutionary process, resulting in behaviors which aren't necessarily predictable to even the programmers.
I think the problem here is that you are flip-flopping between semantic sets: mind-existent words, and physical-monist words. Sure, you can call computer processing "imagination" if that word is useful in explaining a model you have in AI. However, that word already refers to my subjective experience of forming ideas, where abstract images flutter around a kind of mental canvas. You could call the Cyberboy's foot-stamping "experience," but that word already refers to my ability to see red as redness, not simply to the function of stopping at an intersection if I detect light of a particular frequency.
The problem comes when you try to conflate "imagination" and "experience" with my actual imagination, and my actual experience.
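The quoted claim that an evolved system ends up with behavior nobody specifically programmed can be made concrete with a toy sketch (a hypothetical Python example, not any poster's actual system): the winning bit pattern appears nowhere in the source, only the selection pressure does.

```python
import random

random.seed(7)

# Hypothetical "environment": an 8-bit pattern the population is selected
# against. The winning genome is never written out; only the fitness test is.
TARGET_BITS = [0, 0, 1, 0, 1, 0, 1, 0]


def fitness(genome):
    # Count how many bits match the environment's demands.
    return sum(g == t for g, t in zip(genome, TARGET_BITS))


def evolve(pop_size=20, generations=300):
    n = len(TARGET_BITS)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(n)] ^= 1    # one-point mutation
            children.append(child)
        pop = survivors + children             # elitism: the best are kept as-is
    return max(pop, key=fitness)


best = evolve()
print(fitness(best))
```

With elitism and 300 generations the population reliably matches all 8 bits, a behavior "discovered" by variation and selection rather than spelled out by the programmer. Whether that counts as experience is, of course, exactly what the thread is arguing about.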
Posts: 1108
Threads: 33
Joined: June 4, 2013
Reputation:
18
RE: Pleasure and Joy
September 1, 2013 at 9:39 pm
(This post was last modified: September 1, 2013 at 9:40 pm by Walking Void.)
Quote: AI ALWAYS produces behaviors which are not specifically programmed. That's what AI is: a programmed simulated evolutionary process, resulting in behaviors which aren't necessarily predictable to even the programmers.
Err, what exactly do You imply by "aren't necessarily predictable to even the programmers"? With random-generating code implemented, if there is any at all, sure: unless We know which value/magnitude will be produced, We cannot extrapolate any future results. But like all variables, even in calculus, the robot's capacity has real limits that We can assess. For example, a boolean only has 2 outcomes, and a string is limited to the digits 0 to 9, the alphabet, and any special characters used. An integer is limited to positive and negative whole numbers. A float or real variable is any real number, including decimals. Conditional variables and statements are only as plentiful as the craftsmen allowed. Operators in deciding statements are only as versatile as the total number of possible outcomes that the craftsmen determined. All of this is within our control, our knowledge.
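The "real limits" point is easy to demonstrate; Python's own integers are unbounded, but the ctypes module exposes the fixed-width cells the post describes (a minimal sketch for illustration):

```python
import ctypes

# A boolean cell really has only two states.
print(ctypes.c_bool(True).value, ctypes.c_bool(False).value)

# A signed 8-bit integer tops out at 127; one past the limit wraps around,
# because the underlying cell simply has no ninth bit to carry into.
print(ctypes.c_int8(127).value)   # 127
print(ctypes.c_int8(128).value)   # -128
```

The wrap-around is the hard limit in action: the hardware cell cannot represent anything outside its 256 states, exactly as the post argues.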
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation:
45
RE: Pleasure and Joy
September 1, 2013 at 9:50 pm
(September 1, 2013 at 9:39 pm)Walking Void Wrote: Quote: AI ALWAYS produces behaviors which are not specifically programmed. That's what AI is: a programmed simulated evolutionary process, resulting in behaviors which aren't necessarily predictable to even the programmers.
Err, what exactly do You imply by "aren't necessarily predictable to even the programmers"? With random generating code implemented, if any is there at all, sure, unless We know which value/magnitude will be procured We do not extrapolate any future results. But like all variables, even in calculus, the robot's capacity has real limits that We can assess. For example, a boolean only has 2 outcomes, and a string is limited to the numbers 0 to 9 and alphabet and any special characters used. An integer is limited to positive and negative whole numbers. A float or real variable is any real number including decimals. Conditional variables and statements are only as plentiful as the craftsmen allowed. Operators in deciding statements are only as versatile as the total number of possible outcomes that the craftsmen determined. All of this is within our control, our knowledge. The robot's capacity is limited by 2^n states, where n is the number of bits in the system; given say 64GB of memory, that's not a very limiting limit. Given input from the external environment-- temperature, sound, etc., which is analog (i.e. not digital at all), then nobody can exactly predict how those states are going to unfold.
Posts: 1108
Threads: 33
Joined: June 4, 2013
Reputation:
18
RE: Pleasure and Joy
September 1, 2013 at 10:15 pm
(This post was last modified: September 1, 2013 at 11:23 pm by Walking Void.)
So, You have 2 to the power of (8*64*1,000,000,000) possible bit states, but those bits on the hard drive are meant to follow a set of rules limiting the diversity of each cell's capabilities to a simple instruction. For example: register the number 6 to variable F. Binary is written in numerous character sets, like 8-digit sets (0 or 1 being the possible value for each digit), but the range for a set of 8 is limited to 256 values. The logical operands in the CPU are only as complex as the maker decides to implement. For example, an instruction in Windows could involve trigonometric operands, or allocating a list to the hard drive, or including integer data from another drive to be operated on in the active drive (the classic "call"). The bits themselves are just placeholders that fit these limited functions as needed. The bits can be ordered together into something extremely complex, but at the root, the cell for each bit is a basic little switch. The possibilities are only as deep as the machine language, which in itself is simple but frustrating.
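The "basic little switch" picture is straightforward to make concrete: eight two-state switches compose to exactly 256 byte values, and reading a switch setting as a binary number recovers the value it encodes (a throwaway sketch):

```python
from itertools import product

# Each bit is a two-state switch; a byte is eight of them side by side.
switch_settings = list(product((0, 1), repeat=8))
print(len(switch_settings))   # 256 possible settings, as the post says

# Reading a setting as a binary number recovers the byte's value.
value = int("".join(map(str, switch_settings[42])), 2)
print(value)                  # 42
```

Everything above the switches is layered convention: the same 256 settings serve as numbers, characters, or instruction opcodes depending on how the machine language chooses to read them.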
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation:
45
RE: Pleasure and Joy
September 1, 2013 at 11:42 pm
(September 1, 2013 at 10:15 pm)Walking Void Wrote: So, You have 2 to the power of (8*64*1,000,000,000) possible bit states, but those bits on the hard drive are meant to follow a set of rules limiting the diversity of each cell's capabilities to a simple instruction. For example: register the number 6 to variable F. Binary is written in numerous character sets, like 8-digit sets (0 or 1 being the possible value for each digit), but the range for a set of 8 is limited to 256 values. The logical operands in the CPU are only as complex as the maker decides to implement. For example, an instruction in Windows could involve trigonometric operands, or allocating a list to the hard drive, or including integer data from another drive to be operated on in the active drive (the classic "call"). The bits themselves are just placeholders that fit these limited functions as needed. The bits can be ordered together into something extremely complex, but at the root, the cell for each bit is a basic little switch. The possibilities are only as deep as the machine language, which in itself is simple but frustrating. The mechanics are limited, but the variety of possible outcomes has no practical limit given an evolutionary approach to learning. In a pure artificial neural network, the programmer wouldn't even need to know what hardware the "behavior" would be output on. The main limitation is actually the ability to expose the system to a sufficient number of trials for it to refine its output.
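The "refine its output through trials" idea can be illustrated at the smallest possible scale (a hypothetical sketch, nothing like production neuroevolution): a single artificial neuron whose weights are never set by hand, only varied at random and kept when they score better on trials.

```python
import random

random.seed(3)

# Trials: the neuron is graded on the AND truth table, never told the rule.
CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]


def fire(w, x):
    # Step-activation neuron: two input weights plus a bias weight.
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0


def score(w):
    return sum(fire(w, x) == y for x, y in CASES)


# Crudest possible evolutionary loop: random variation, keep the fittest.
best = [random.uniform(-2, 2) for _ in range(3)]
for _ in range(20000):
    cand = [random.uniform(-2, 2) for _ in range(3)]
    if score(cand) > score(best):
        best = cand
print(score(best))   # 4 out of 4: the weights were found, not programmed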
Posts: 3188
Threads: 8
Joined: December 9, 2011
Reputation:
31
RE: Pleasure and Joy
September 2, 2013 at 5:41 am
(September 1, 2013 at 5:22 pm)bennyboy Wrote: You can "indicate" whatever you want as often as you want, but the post count isn't evidence of truth. The behaviors you are talking about are sufficient to establish brain function, not actual experience. If you are trying to establish that a black hole has fairies inside it, you can't get there from establishing that light seems to be bending in ways that indicate a black hole.
Within this analogy, you are the only one trying to establish that the black-hole has fairies inside it. All I'm doing is establishing the existence of a black-hole without indicating its cause.
If the light bends in ways indicative of a black-hole, then the most reasonable conclusion is that there is a black-hole causing it to bend. This statement so far says nothing about the fundamental nature of the black-hole itself. At this point, you can say that the black-hole is caused by superdense matter or by fairies.
Similarly, if an entity displays behavior indicative of experience, then the most reasonable conclusion is that actual experience is causing that behavior. This statement says nothing about the fundamental nature of experience itself. At this point, you can say that experience is caused by brain function or by a soul.
However, further investigation of the facts reveals that the black-hole is made of superdense matter and experience is caused by brain function.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: You can't prove that behavior requires actual experience, since you are a physical monist. You can only prove that behavior requires brain function. It's simple: data in, processing, behavior out. No fairies required.
Wrong. Again. As a physical monist, I regard experience as a form of internal data-processing. Therefore, for behavior specific to that data-processing to occur, experience must occur as well.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: . . . or a duck-like mechanism, lacking ducky feelings.
No. It'd still be a duck.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: The point is I don't accept that the Cyberboy 2000 actually experiences, and that the philosophical implications of allowing a world of Cyberboys to compete with humans for resources because they "experience" (read: behave like they experience) would mean a loss of that special quality of actual experience which I believe humans have, and Cyberboys may not.
Hat-trick. Three fallacies in one sentence. That has to be a record.
Strawman - If Cyberboy 2000 is capable of actual experience, then he'd have to be given the same rights as humans. Note that I've argued the exact opposite.
Appeal to Consequences - Acknowledging that Cyberboy 2000 is capable of experience would result in loss of 'special' status of humans. So, I'm not going to accept that.
Begging the question - Starting from the assumption that Cyberboy 2000 is not capable of experience and therefore rejecting all evidence of its behavior indicative of experience.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: I do not accept your definition of imagination. Imagination requires an image or vision of the problem, not simply behaving like one has those things.
Imagination is not limited to visual cues. However, the program does have an image of the problem - that'd be the chess board you see on the screen. And if you look into its analysis, you'd see the possible future images of the chess-board based on its expectation of your moves. Any way you slice it, the program is imagining your future moves (since those haven't been made yet) and making its own moves in anticipation.
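The look-ahead genkaus describes is mechanically checkable on a toy game; a minimax sketch (a subtraction game standing in for chess, which works on the same principle) shows the program evaluating positions that do not yet exist on the board:

```python
def minimax(pile, my_turn):
    # Toy game: players alternately take 1 or 2 stones; whoever takes the
    # last stone wins. +1 means "I" win, -1 means "I" lose.
    if pile == 0:
        return -1 if my_turn else 1   # the previous mover took the last stone
    results = [minimax(pile - take, not my_turn)
               for take in (1, 2) if take <= pile]
    return max(results) if my_turn else min(results)


def best_move(pile):
    # Choose the move whose imagined future is best for me.
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))


print(best_move(4), best_move(5))   # 1 2: always leave a multiple of 3
```

Whether exhaustively simulating futures that haven't happened yet deserves the word "imagining" is exactly the semantic dispute in this thread; the code only shows what the mechanism does.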
(September 1, 2013 at 5:22 pm)bennyboy Wrote: Absolutely, I do. We could be in a BIJ, a BIV, the Matrix, or the Mind of God. We could be another living entitity having a long dream. There are many scenarios in which the world as it seems, including all the experiences we have of science, could be non-representative of reality.
Coming up with a hypothetical and establishing that hypothetical as possible are two different things. Simply saying that "we could be in BIV or Matrix etc." is not sufficient. You have to establish it as logically coherent as well. And you have not done so.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: As you've pointed out, our perception is fundamentally flawed.
I HAVE MOST CERTAINLY NOT. My position is that our perception may have occasional flaws, but it is fundamentally correct.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: No, but reality might. Science is good at making bridges, but demanding that the universe conform to the physical monism required for science isn't to make a new discovery about it: it's as truth-seeking as religious insitutions declaring astronomy heretical. Either you can prove that your view represents reality, or you cannot. Currently, you cannot.
Actually, you can prove that your views represent reality - because if they didn't, the bridges would not stand. Science doesn't require the universe to conform to physical monism - it says that physical monism is an accurate representation of reality. Should that turn out not to be the case, then scientific theories based on it would contradict reality. And so far, they are in agreement.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: Nope. You are using experience to prove the nature of experience.
That's precisely what I said - having assumed experience as the basis of knowledge and established its validity by consistent application, using it to examine and prove the nature of experience is not begging the question.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: AI ALWAYS produces behaviors which are not specifically programmed. That's what AI is: a programmed simulated evolutionary process, resulting in behaviors which aren't necessarily predictable to even the programmers.
In that case, the AI developing the capacity to experience without it being programmed in it shouldn't be surprising.
(September 1, 2013 at 5:22 pm)bennyboy Wrote: I think the problem here is that you are flip-flopping between semantic sets: mind-existent words, and physical-monist words. Sure, you can call computer processing "imagination" if that word is useful in explaining a model you have in AI. However, that word already refers to my subjective experience of forming ideas, where abstract images flutter around a kind of mental canvas. You could call the Cyberboy's foot-stamping "experience," but that word already refers to my ability to see red as redness, not simply to the function of stopping at an intersection if I detect light of a particular frequency.
The problem comes when you try to conflate "imagination" and "experience" with my actual imagination, and my actual experience.
I was wondering when you'd move your goalposts to the semantic position.
Unfortunately for you, you do not have the copyright on mind-existent words. You do not get to start with the assumption that "experience", "imagination" etc. are words that are meaningless within physical-monist context and any application of those words within that context is a redefinition. And you most certainly do not get to do this without even providing a definition of the words which you regard as the true Scotsman.
Your imagination may be limited to fluttering images on a canvas - mine isn't. That does not mean I don't have "true" imagination. Also, imagination specifically refers to the process of forming a particular kind of ideas - not your subjective experience of that process. Regarding the Cyberboy's stamping of its foot - I never referred to it as "experience". I specifically referred to it as behavior resulting from the experience. The same way I'd refer to your stopping at a red light as behavior resulting from experience.
The processing of the visual frequency received from the light is called "seeing red". Processing of this process itself is called "experiencing redness". Since you do not have inherent code in your brain that results in your stopping at the red light, your action of stopping is the result of your subjective awareness, i.e. the latter process. In the same way, if the Cyberboy 2000 does not have code in its brain where the direct processing of visual frequency results in stopping, then its behavior of stopping at the red light would also be the result of the second process, i.e. its subjective awareness of redness. Disregarding it as "not actual experience" is a baseless proposition and a no true Scotsman fallacy.
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation:
45
RE: Pleasure and Joy
September 2, 2013 at 6:16 am
genkaus Wrote:Wrong. Again. As a physical monist, I regard experience as a form of internal data-processing. Therefore, for behavior specific to that data-processing to occur, experience must occur as well. The problem is that I have actual experience, and you are defining something other than that as experience. You can play semantic games all you want, but the fact is that I am known (at least by me) to have a rich subjective awareness, and the Cyberboy 2000 can be known only to behave as though it does. You can conflate your definition with mine as much as you want, but that doesn't change the philosophical reality of what I am, and what the Cyberboy (as far as we are able to know) is not.
Quote:Begging the question - Starting from the assumption that Cyberboy 2000 is not capable of experience and therefore rejecting all evidence of its behavior indicative of experience.
Whether it does or does not actually experience is not known, or knowable. I know I experience, because I wake up in the morning and do just that. I'm willing to assume that other humans do that, because they seem similar enough to me in other regards that it's worth making that assumption.
If I've made a positive assertion about the existence/lack of Cyberboy's ability to actually experience, then I happily retract it. I don't, and can't know-- and neither can anyone else. But there is enough dissimilarity that I'm not willing to assume it just because it behaves like a human.
Quote:Imagination is not limited to visual cues. However, the program does have an image of the problem - that'd be the chess board you see on the screen. And if you look into its analysis, you'd see the possible future images of the chess-board based on its expectation of your moves. Any way you slice it, the program is imagining your future moves (since those haven't been made yet) and making its own moves in anticipation.
Nope. You're just taking good old-fashioned, imagination-less mechanism, and applying a mind-existent term to it as though its lack of actual experience means nothing.
Quote:Coming up with a hypothetical and establishing that hypothetical as possible are two different things. Simply saying that "we could be in BIV or Matrix etc." is not sufficient. You have to establish it as logically coherent as well. And you have not done so.
No I don't. All I have to do is show that the mind attempting to comprehend its own nature is a circle. And to say the obvious: circles are bad.
Quote:Actually, you can prove that your views represent reality - because of they didn't, the bridges would not stand.
Right. In the context in which physics is done-- looking at things, experimenting on them, and manipulating them to our benefit-- the bridge and the science that allow it to stand are perfectly real. Whether that reality is in the Matrix, or a BIJ, or the Mind of God, is irrelevant, so long as the bridge stands. So you've proven nothing about the ultimate nature of things-- whether they are purely physical, purely mental, a mix, or something different entirely. All you've proven is that in our reality (whatever it is) some things are consistent enough to make categorizing those consistencies useful.
Quote:Unfortunately for you, you do not have the copyright on mind-existent words. You do not get to start with the assumption that "experience", "imagination" etc. are words that are meaningless within physical-monist context and any application of those words within that context is a redefinition. And you most certainly do not get to do this without even providing a definition of the words which you regard as the true Scotsman.
As I said, you can apply any words you want to your model, in whatever capacity you want. However, the fact is that I wake up and begin a rich subjective experience, and you cannot prove that the Cyberboy 2000 does. Therefore, I'm not willing to engage in a conversation where it is demanded that Cyberboy's data processing is conflated with my actual experience as a thinking, feeling human.