Seeing red
RE: Seeing red
(January 23, 2016 at 6:48 pm)ChadWooters Wrote:
(January 22, 2016 at 8:07 pm)Jörmungandr Wrote: Its instructions and the data in the hardware are very real physical manifestations.
Of what are they manifestations?

Your question makes no sense. They are manifestations of matter/energy.
RE: Seeing red
(January 23, 2016 at 6:50 pm)Jörmungandr Wrote:
(January 23, 2016 at 6:48 pm)ChadWooters Wrote: Of what are they [instructions and data] manifestations?
Your question makes no sense.  They are manifestations of matter/energy.
You see no difference between an image and what an image is made out of. They are identical to you. Is that what you are saying?
Reply
RE: Seeing red
(January 23, 2016 at 7:12 pm)ChadWooters Wrote:
(January 23, 2016 at 6:50 pm)Jörmungandr Wrote: Your question makes no sense.  They are manifestations of matter/energy.
You see no difference between an image and what an image is made out of. They are identical to you. Is that what you are saying?

You're not making any sense, Chad.  State your problem or whatever in plain language or don't.

Let's go back over the history of your question:
(January 22, 2016 at 8:07 pm)Jörmungandr Wrote: In other words, it's not really a mechanical device because "it's imbued with magic."   No, you can use whatever nomenclature you like, it doesn't change the fact that a mechanical device has intentionality.  Its instructions and the data in the hardware are very real physical manifestations. All your talk about final causes doesn't change that.  And unless those instructions are expressed in the hardware of the robot, they do not function at all.  The efficacy depends upon being mated to the right hardware.  Period.
(January 23, 2016 at 6:48 pm)ChadWooters Wrote: Of what are they manifestations?
(January 23, 2016 at 6:50 pm)Jörmungandr Wrote: Your question makes no sense.  They are manifestations of matter/energy.
(January 23, 2016 at 7:12 pm)ChadWooters Wrote: You see no difference between an image and what an image is made out of. They are identical to you. Is that what you are saying?


You obviously have something on your mind, asking me these loaded questions; why don't you spill it? I tire of your leading.
RE: Seeing red
(January 23, 2016 at 12:07 am)bennyboy Wrote:
(January 22, 2016 at 11:15 pm)Emjay Wrote: By common medium do you mean the body map? As in how you can feel referred pain in all the wrong places etc.? Or are you saying that this common medium exists (in whatever non-existing sense you define it) wherever there is matter, and that it essentially forms blocks bound together by relationships etc., so that, say, the brain... or what is active in the brain could form one such block, and the bigger it is, the richer and more subjective it is? Or something else entirely?
Well, it's clear that all our experiences are drawn together into a single sense of awareness.  What is it into/around which they are drawn together?  How is it that multiple, mostly independent systems manifest as a sense of unified experience?
One thing that a neural network is exceptionally good at is finding common denominators... i.e. stereotypes. If a pattern is there, it will find it and represent it, given enough interconnectivity between neurons, which is certainly the case in the brain. So given a system that uses a neural network to integrate sensory data from a lot of different sources and coordinate it for the benefit of the whole organism, I believe it is inevitable that the network would identify the common denominator that all this sensory data applies to - the organism - and therefore neurally represent a self at various levels of abstraction, including the body map for instance, and perhaps, at the apex of that, the single self... the part that appears to own all of this.

It wouldn't matter which neuron(s) did the job... that would be changeable depending on how the network settles. So with split-brain patients, who at least appear to have two consciousnesses, the network, split into two smaller networks, would settle into new states in each half, with different neurons coming to represent the self. That self-representation would then be one of the states differentiated in consciousness, taking the form of a single sense of awareness, with all the perceptions revolving around it being a manifestation of the relationships and states it has to preserve in the network.
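A minimal sketch of the "common denominator" idea above, not from the thread and purely illustrative: a single Hebbian unit trained with Oja's rule on several noisy "sensory channels" that all carry one shared underlying signal will settle on weights aligned with that shared signal. The channel count, constants, and framing here are all invented for the example.

```python
# Hypothetical sketch: one linear neuron trained with Oja's Hebbian rule on
# several "sensory channels" that share a single hidden common cause. The
# weights converge toward that shared component, i.e. the unit comes to
# represent what the channels have in common.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 6           # e.g. touch, vision, proprioception, ... (made up)
n_samples = 5000
lr = 0.01                # learning rate

# Each sample: one shared signal plus per-channel noise.
shared = rng.normal(size=(n_samples, 1))
mixing = rng.uniform(0.5, 1.5, size=(1, n_channels))   # how strongly each channel carries it
x = shared @ mixing + 0.3 * rng.normal(size=(n_samples, n_channels))

w = rng.normal(scale=0.1, size=n_channels)
for sample in x:
    y = w @ sample                      # the unit's response
    w += lr * y * (sample - y * w)      # Oja's rule: Hebbian term with a decay that keeps w bounded

# The learned weights line up with the mixing vector (up to sign and scale).
alignment = abs(w @ mixing.ravel()) / (np.linalg.norm(w) * np.linalg.norm(mixing))
print(f"alignment with the shared component: {alignment:.3f}")   # close to 1.0
```

The only point of the sketch is that this kind of pattern extraction falls out of a simple local learning rule; by itself it says nothing about selves or consciousness.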
RE: Seeing red
(January 23, 2016 at 7:12 pm)ChadWooters Wrote:
(January 23, 2016 at 6:50 pm)Jörmungandr Wrote: Your question makes no sense.  They are manifestations of matter/energy.
You see no difference between an image and what an image is made out of. They are identical to you. Is that what you are saying?

They are part of a loop of mechanical-electrical interactions (see the diagram below).

[Image: rep-system-01.jpg]

What is a representation at one level of description is also part of a system of mechanical interactions.  It is both.
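A toy sketch of the sensor → computer → actuator → environment → sensor loop described earlier in the thread; this is my own gloss, not the model in the diagram, and the "light-seeking robot" framing and all numbers are invented. The internal state mirrors one feature of the environment (an isomorphism of sorts), and acting on that state changes what is sensed next.

```python
# Hypothetical sense -> represent -> act -> environment feedback loop in 1-D.
def run(light_pos: float = 10.0, robot_pos: float = 0.0, steps: int = 20) -> float:
    believed_offset = 0.0                       # internal representation of the world
    for _ in range(steps):
        reading = light_pos - robot_pos         # sense: the sensor reads the current offset
        believed_offset = reading               # represent: update the internal model
        robot_pos += 0.5 * believed_offset      # act: actuators move the robot using the model
        # environment: robot_pos has changed, so the next reading will differ
    return robot_pos

if __name__ == "__main__":
    print(f"final position: {run():.3f}")       # approaches 10.0, the light's position
```

Each stage of the loop is trivial on its own; the representation and the mechanical-electrical interactions are one process described at two levels, which is the "it is both" point above.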
RE: Seeing red
(January 23, 2016 at 3:28 pm)Jörmungandr Wrote: I already gave you a definition but you seem determined to simply draw this into an argument about semantics.
Okay, I'm not trying to make a semantic argument. However, since I'm now looking for a physicalist perspective on mind, and since people are using non-physical words like "information" or "intent," I want to be clear about what you mean.

Quote:Intention is what a system has when it represents an isomorphism of the environment and acts on that isomorphism to further its interests in its environment.  A galaxy doesn't form an isomorphic representation nor act on that representation.
What interests? Why does a given physical system have interests, and what about having them magically makes it experience qualia?

Quote:Or perhaps we're talking about how dog-like human structures are.  Or how robotic-like certain dog structures are.  What makes you think this is a response to your post about supervenience?  It's not.  But if you ask me I will answer.  Mind doesn't 'supervene' on mere matter of any configuration.  Mind is a representational system like that of the robots.  The human representational system is capable of greater flexibility and complexity, but this is a difference in degree, not kind.
Okay, what is it about the universe that, if certain kinds of information or processing is there, causes qualia to exist-- when there is nothing like this in "matter of any configuration"?

I'm really trying to see a purely physical perspective here, but I keep hearing about things which are immaterial, or at least non-concrete, and which are ambiguous or not clearly defined except in terms of arbitrary assessments.

Quote:Again with the dragging the conversation back to petty points.  I never pointed to "information processing" as an indicator of mind.
You aren't the only person in this thread, but I will try in future to address replies only to you and not bring in what I'm discussing with others.

Quote:  I pointed to representational systems as indicator of mind.  And by representational system, I don't mean just the robot itself or its computer.  By representational system I mean the entire set of the feedback loop which includes the sensors, which feed the computer, which feeds the actuators on the wheels which then feeds back upon the environment, which then feeds back into the sensors.  It is an entire economy in which "information processing" is only a component.
Oh, I get it. However, I don't see a non-arbitrary division between what a robot does and what, say, a galaxy does. Does a galaxy not intake light and materials, manipulate them, and output other light and materials? Does it not send practically infinite photons to neighboring galaxies in a cluster?

I still think we're just saying, "Mind is whatever seems human to us."
RE: Seeing red
(January 23, 2016 at 10:39 pm)bennyboy Wrote:
(January 23, 2016 at 3:28 pm)Jörmungandr Wrote: Intention is what a system has when it represents an isomorphism of the environment and acts on that isomorphism to further its interests in its environment.  A galaxy doesn't form an isomorphic representation nor act on that representation.
What interests?  Why does a given physical system have interests, and what about having them magically makes it experience qualia?

This was never about 'qualia'.  It was about the intentionality in a physical system.  Qualia imply consciousness.  I could explain how such systems similarly account for qualia and consciousness, but that is beyond the scope of the original discussion, which you seem to have hijacked with questions about consciousness.

(January 23, 2016 at 10:39 pm)bennyboy Wrote:
(January 23, 2016 at 3:28 pm)Jörmungandr Wrote: Or perhaps we're talking about how dog-like human structures are.  Or how robotic-like certain dog structures are.  What makes you think this is a response to your post about supervenience?  It's not.  But if you ask me I will answer.  Mind doesn't 'supervene' on mere matter of any configuration.  Mind is a representational system like that of the robots.  The human representational system is capable of greater flexibility and complexity, but this is a difference in degree, not kind.
Okay, what is it about the universe that, if certain kinds of information or processing is there, causes qualia to exist-- when there is nothing like this in "matter of any configuration"?

This is an argument from ignorance, and it also begs the question by implying that the human mind is not a specific configuration of matter.

(January 23, 2016 at 10:39 pm)bennyboy Wrote: I'm really trying to see a purely physical perspective here, but I keep hearing about things which are immaterial, or at least non-concrete, and which are ambiguous or not clearly defined except in terms of arbitrary assessments.
. . . . . .
(January 23, 2016 at 3:28 pm)Jörmungandr Wrote:  I pointed to representational systems as indicator of mind.  And by representational system, I don't mean just the robot itself or its computer.  By representational system I mean the entire set of the feedback loop which includes the sensors, which feed the computer, which feeds the actuators on the wheels which then feeds back upon the environment, which then feeds back into the sensors.  It is an entire economy in which "information processing" is only a component.
Oh, I get it.  However, I don't see a non-arbitrary division between what a robot does and what, say, a galaxy does.  Does a galaxy not intake light and materials, manipulate them, and output other light and materials?  Does it not send practically infinite photons to neighboring galaxies in a cluster?

I still think we're just saying, "Mind is whatever seems human to us."

And I think you're skewing things to make a point.  I have described in detail the necessary components and constraints required for intentionality.  You've introduced vague parallels that don't fit the description of a representational system given.  And you ask why the two aren't the same thing?  Because they're not.  A galaxy doesn't fit the description already laid out.  Constructing vague parallels while ignoring the specifics given is dishonest on your part.  Again you've hijacked questions about intentionality to make this about consciousness; it never was about mind per se, but I've been generous enough to indulge your questions.  If you're just going to drive like a lemming towards cookie-cutter arguments like this, you will find your questions being ignored.

I have defined in non-arbitrary terms how the representational systems in question function; the role of different aspects of the system, and the constraints upon them.  You have taken a system which only partially fulfills those roles and constraints and asked why it isn't the same.  A galaxy doesn't form a representational system.  Period.  Repeatedly asking why not is either a sign you've been dropped on your head or the expression of a rhetorical agenda.  I see it as the latter.  Unless you can make something of your question other than a blasé misunderstanding of the descriptions given, I don't see the point of your current line of questioning.

(January 23, 2016 at 10:39 pm)bennyboy Wrote: I still think we're just saying, "Mind is whatever seems human to us."

And I tire of such facile arguments.  Of course a description of mind in terms of components is going to parallel a description of the human mind.  The human mind is the prototype to which we are seeking similarities in other systems.  I haven't given a description of mind yet, so the only place you're pulling this from is a pre-rehearsed playbook.  If you keep trammeling the argument with dull, stereotypical reactions like this, I'm going to start ignoring you.  I gave a description of a representational system that instances intentionality, not mind.  You are taking my descriptions of a non-mind system and launching a rhetorical point upon it by blurring that distinction with unasked-for questions about 'mind'. I foresaw this days ago, but chose to indulge you. Whether you do this by design or simply because that's who you are, I don't care; I'm tiring of your stereotyped rhetorical maneuvers.
RE: Seeing red
(January 23, 2016 at 11:27 pm)Jörmungandr Wrote: This was never about 'qualia'.  It was about the intentionality in a physical system.  Qualia imply consciousness.  I could explain how such systems similarly account for qualia and consciousness, but that is beyond the scope of the original discussion, which you seem to have hijacked with questions about consciousness.
The OP.

Quote:
(January 23, 2016 at 10:39 pm)bennyboy Wrote: Okay, what is it about the universe that, if certain kinds of information or processing is there, causes qualia to exist-- when there is nothing like this in "matter of any configuration"?
This is an argument from ignorance, and it also begs the question by implying that the human mind is not a specific configuration of matter.
It's not an argument from ignorance. It's not even an argument, since I'm not putting forward a position right now. I want to know what particular physical structures have mind, and you are talking about isomorphism, intentionality, and so on. But all these words carry value attributions which require mind, so they are circular at best, as far as I can see.

Quote:And I think you're skewing things to make a point.  I have described in detail the necessary components and constraints required for intentionality.  You've introduced vague parallels that don't fit the description of a representational system given.  And you ask why the two aren't the same thing?  Because they're not.  A galaxy doesn't fit the description already laid out.  Constructing vague parallels while ignoring the specifics given is dishonest on your part.  Again you've hijacked questions about intentionality to make this about consciousness; it never was about mind per se, but I've been generous enough to indulge your questions.  If you're just going to drive like a lemming towards cookie-cutter arguments like this, you will find your questions being ignored.
Alright, I'm done. I'm interested in the subject, but not in your tone.
RE: Seeing red
(January 23, 2016 at 10:39 pm)bennyboy Wrote:
(January 23, 2016 at 3:28 pm)Jörmungandr Wrote: Or perhaps we're talking about how dog-like human structures are.  Or how robotic-like certain dog structures are.  What makes you think this is a response to your post about supervenience?  It's not.  But if you ask me I will answer.  Mind doesn't 'supervene' on mere matter of any configuration.  Mind is a representational system like that of the robots.  The human representational system is capable of greater flexibility and complexity, but this is a difference in degree, not kind.
Okay, what is it about the universe that, if certain kinds of information or processing is there, causes qualia to exist-- when there is nothing like this in "matter of any configuration"?

No. That's a strawman argument. Certain kinds of processing do not 'cause qualia to exist'. Certain configurations of processing are qualia happening. You are suggesting that qualia are ontological entities in and of themselves. I don't think they are. Certain kinds of processing in our mind convince us that we are experiencing qualia. Qualia aren't entities; they're artifacts of a specific representational system. Our 'mind' is a representation of a system that doesn't exist. This system has a body, and thoughts, and qualia, happening in a no-space space inside our heads, physically, even though there is no such system there.

[Image: rep-system-02.jpg]
RE: Seeing red
Okay, let's revise the word "cause" to "is." Let's say that the brain, as a whole, coordinates input from various sources and collates them into an overall representation. So when you say "isomorphic," you mean that the representation in the brain mirrors, at least approximately, the world as the organism (or robot or whatever) experiences it, constrained by the physical limits of the organism to collect information about the world.

My question is still about levels. What states are representative, and which are just states? It's hard to verbalize what I'm asking here, but let me try an analogy. Let's say we are making a physical representation, i.e. a model, of the world. We could start by building little trees, little cars, etc., each of those in turn composed of leaves or wheels, each of those composed of subparts. We would know also that each subpart consisted of chemicals with specific structures, atoms with specific structures, etc. Some of those (like the quantum mechanics) we might be completely oblivious to. At which level of organization do we start to accept that we are building part of a tree? Only when we have the finished model? Only when we have identifiable parts, like leaves and branches? Right from the start, since we know the chemical compounds we are making are part of that final tree, despite having nothing of tree-ness in them?

Let me ask you this: would you say that our mental representation is composed of parts, and those of subparts? If so, how many child nodes would you allow and still call an idea part of that representation? At what scale of order would you say, "That's just stuff"?
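To make the question concrete, here is a toy part/sub-part hierarchy with an arbitrary depth cutoff below which things stop counting as "part of the representation" and become "just stuff". The class, names, and cutoff value are all invented; the arbitrariness of that cutoff is exactly what the question is probing.

```python
# Hypothetical part/sub-part hierarchy with an arbitrary "just stuff" cutoff.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    children: list["Part"] = field(default_factory=list)

def label(part: Part, depth: int = 0, cutoff: int = 2) -> None:
    # Anything deeper than `cutoff` levels is labelled "just stuff".
    kind = "part of the representation" if depth <= cutoff else "just stuff"
    print("  " * depth + f"{part.name}: {kind}")
    for child in part.children:
        label(child, depth + 1, cutoff)

tree = Part("tree", [
    Part("branch", [
        Part("leaf", [
            Part("chlorophyll molecule", [Part("carbon atom")]),
        ]),
    ]),
])
label(tree)   # the choice of cutoff=2 is arbitrary, which is the point
```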




