
Seeing red
RE: Seeing red
(January 22, 2016 at 11:02 pm)bennyboy Wrote:  I think hidden behind the various discussions of intention, representation, etc. is an implied substance dualism: we are looking for those things that we already consider meaningful, because we have minds.  However, our brain doesn't follow special physical rules; why then would it achieve a special state?

What special state has it achieved? (Special with respect to physicality....)
RE: Seeing red
(January 22, 2016 at 11:13 pm)Jörmungandr Wrote: No, it isn't a memory. The information is represented in these states, but these states do not form a representational system. Note that in the case of the robot, a representational system consists of information, a system for representing it, and an environment for which that representation is meaningful (and by meaningful here I mean that it contains information which, by its content, guides the behavior of the representational system; in the robot driver example, this element is reflected in the fact that the road being represented is amenable to the motion of the vehicle; a robot driver in mid-air or on its back wouldn't be representing objects in a meaningful sense).

Does not a single photon from a thousand light years away affect the state of even an entire galaxy? Isn't the way the galaxy unfolds a "behavior?"

It still seems to me that we are using a lot of substance-dualist words here: intention, meaning, even behavior. But it is WE as thinking humans who see intention, meaning and behavior in some systems.
RE: Seeing red
(January 22, 2016 at 11:17 pm)Jörmungandr Wrote:
(January 22, 2016 at 11:02 pm)bennyboy Wrote:  I think hidden behind the various discussions of intention, representation, etc. is an implied substance dualism: we are looking for those things that we already consider meaningful, because we have minds.  However, our brain doesn't follow special physical rules; why then would it achieve a special state?

What special state has it achieved? (Special with respect to physicality....)

That's a very good question, and it's exactly mine.  What's unique to the brain that isn't found all over the universe?
RE: Seeing red
(January 22, 2016 at 11:15 pm)Emjay Wrote: By common medium do you mean the body map? As in how you can feel referred pain in all the wrong places, etc.? Or are you saying that this common medium exists (in whatever non-existing sense you define ;) ) wherever there is matter, and that it essentially forms blocks bound together by relationships, etc., so that, say, the brain... or what is active in the brain could form one such block, and the bigger it is, the richer and more subjective it is? Or something else entirely? ;)
Well, it's clear that all our experiences are drawn together into a single sense of awareness. What is it into/around which they are drawn together? How is it that multiple, mostly independent systems manifest as a sense of unified experience?
RE: Seeing red
What's unique to a boat that isn't found all over the universe?  What's unique to the wing of a plane?  Why doesn't everything float and fly like a floatplane?

@ The q directly above, bussing. Just one way to do it, our way is probably significantly more complicated.
RE: Seeing red
(January 23, 2016 at 2:17 am)Rhythm Wrote: What's unique to a boat that isn't found all over the universe?  What's unique to the wing of a plane?  Why doesn't everything float and fly like a floatplane?

@ The q directly above, bussing.  Just one way to do it, our way is probably significantly more complicated.
There's nothing unique to a boat that isn't found all over the universe.  It's just a bunch of stuff held together, and because we have minds, we call that stuff a "boat."
RE: Seeing red
(January 23, 2016 at 12:02 am)bennyboy Wrote:
(January 22, 2016 at 11:13 pm)Jörmungandr Wrote: No, it isn't a memory. The information is represented in these states, but these states do not form a representational system. Note that in the case of the robot, a representational system consists of information, a system for representing it, and an environment for which that representation is meaningful (and by meaningful here I mean that it contains information which, by its content, guides the behavior of the representational system; in the robot driver example, this element is reflected in the fact that the road being represented is amenable to the motion of the vehicle; a robot driver in mid-air or on its back wouldn't be representing objects in a meaningful sense).

Does not a single photon from a thousand light years away affect the state of even an entire galaxy?  Isn't the way the galaxy unfolds a "behavior?"

Why do you ask? Do you need to equivocate on a few terms?

(January 23, 2016 at 12:02 am)bennyboy Wrote: It still seems to me that we are using a lot of substance-dualist words here: intention, meaning, even behavior.  But it is WE as thinking humans who see intention, meaning and behavior in some systems.

Or those systems really have it, and all your posturing about WE as thinking humans is just special pleading. "They have intentionality, but it's not real intentionality, it's derived." Yes, I have the ability to describe the robot's behavior in terms of meaning and intention. Unless you're just begging the question, there is no difference between applying these terms to the robot and applying them to WE humans. "The robots have aboutness, but it's not the kind of aboutness that I have." How would you know? All you're doing is suggesting that we have richer systems for intentionality and meaning, in that we can use our concepts to ascribe meaning to the behavior of robots. That doesn't exclude the possibility that the robots' behaviors are deserving of such description.

In an experiment at the Max Planck Institute, dogs and chimpanzees were compared on a simple test. A small reward was put under one of two cups, and the experimenter would point to the cup that concealed the reward. The task was to find the reward. The dogs did well on the task, going straight to the cup with the reward, whereas the chimps were not helped by the pointing. Obviously the dogs understood that the pointing finger was 'about' something in a line from the end of the finger. They understood the intentionality of the gesture, and no amount of human reinterpreting can take that away from them.

A few additional points. First, it's clear that if dogs can possess intentionality, it doesn't take much hardware to implement it. Much as we love our dogs, they aren't exceptional as a species in terms of intelligence. Second, while the test was not performed with wolves, I suspect wolves would be just as stymied as the chimpanzees; and if not the wolves, at least a recent ancestor of theirs. This would show that it takes a relatively short time, 10,000 years, to evolve such intentional behavior. This suggests that the machinery for such behaviors already exists in species like the wolves or the chimpanzees, simply waiting for evolution to tease it out into a manifest behavior. Third, it points to the possibility that intentionality can evolve. Unless we are postulating doggy souls, some mechanistic change occurred in the brain over the course of those 10,000 years to bring out this intentionality. This suggests that the substance necessary for such behaviors is a mechanism, not some mysterious ectoplasm.

So, no, it isn't only WE as thinking humans, but quite likely much of the animal kingdom.
RE: Seeing red
(January 23, 2016 at 1:06 pm)Jörmungandr Wrote: Why do you ask? Do you need to equivocate on a few terms?
Quite the contrary. I want a non-arbitrary definition.

Quote:
(January 23, 2016 at 12:02 am)bennyboy Wrote: It still seems to me that we are using a lot of substance-dualist words here: intention, meaning, even behavior.  But it is WE as thinking humans who see intention, meaning and behavior in some systems.

Or those systems really have it, and all your posturing about WE as thinking humans is just special pleading. "They have intentionality, but it's not real intentionality, it's derived." Yes, I have the ability to describe the robot's behavior in terms of meaning and intention. Unless you're just begging the question, there is no difference between applying these terms to the robot and applying them to WE humans. "The robots have aboutness, but it's not the kind of aboutness that I have." How would you know? All you're doing is suggesting that we have richer systems for intentionality and meaning, in that we can use our concepts to ascribe meaning to the behavior of robots. That doesn't exclude the possibility that the robots' behaviors are deserving of such description.
I'm not talking about having or not having intention. I'm talking about defining it. What separates a system with "intention" from any other physical system? A brain is a collection of physical materials going through their individual processes, and so is a galaxy. Why do you say one is intentional, and one is just stuff happening?

Quote: In an experiment at the Max Planck Institute, dogs and chimpanzees were compared on a simple test. A small reward was put under one of two cups, and the experimenter would point to the cup that concealed the reward. The task was to find the reward. The dogs did well on the task, going straight to the cup with the reward, whereas the chimps were not helped by the pointing. Obviously the dogs understood that the pointing finger was 'about' something in a line from the end of the finger. They understood the intentionality of the gesture, and no amount of human reinterpreting can take that away from them.
It seems to me that what we're really talking about is how human-like certain physical structures are. This is fine when you have an intact human-like organism, but it sheds little light on my post a couple pages ago: on what level of physical organization does mind supervene?

Quote: A few additional points. First, it's clear that if dogs can possess intentionality, it doesn't take much hardware to implement it. Much as we love our dogs, they aren't exceptional as a species in terms of intelligence. Second, while the test was not performed with wolves, I suspect wolves would be just as stymied as the chimpanzees; and if not the wolves, at least a recent ancestor of theirs. This would show that it takes a relatively short time, 10,000 years, to evolve such intentional behavior. This suggests that the machinery for such behaviors already exists in species like the wolves or the chimpanzees, simply waiting for evolution to tease it out into a manifest behavior. Third, it points to the possibility that intentionality can evolve. Unless we are postulating doggy souls, some mechanistic change occurred in the brain over the course of those 10,000 years to bring out this intentionality. This suggests that the substance necessary for such behaviors is a mechanism, not some mysterious ectoplasm.

So, no, it isn't only WE as thinking humans, but quite likely much of the animal kingdom.
I don't think you got my drift. Whichever organism it is that arbitrarily designates certain states as "information" and others as meaningless, the distinction is still arbitrary. Maybe dogs can do it, maybe chimpanzees can't. But does their lack of interest in any particular set of physical states really mean that those states don't represent information? Or is it just that those organisms are geared toward an interest in the states of certain kinds of systems?

This is important because, for the last few pages, we've been talking about information processing as an indicator of mind, and I still haven't seen a non-arbitrary explanation of which physical states or systems do or don't represent information. So far as I can tell, information seems to be something like "useful or interesting physical states or processes" (which implies that observation by a subjective agent is required in order to say something is information), whereas anything lacking interest is just stuff that happens. But this seems far too subjective to serve as the basis for a theory of mind.
RE: Seeing red
(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote: Or those systems really have it, and all your posturing about WE as thinking humans is just special pleading. "They have intentionality, but it's not real intentionality, it's derived." Yes, I have the ability to describe the robot's behavior in terms of meaning and intention. Unless you're just begging the question, there is no difference between applying these terms to the robot and applying them to WE humans. "The robots have aboutness, but it's not the kind of aboutness that I have." How would you know? All you're doing is suggesting that we have richer systems for intentionality and meaning, in that we can use our concepts to ascribe meaning to the behavior of robots. That doesn't exclude the possibility that the robots' behaviors are deserving of such description.
I'm not talking about having or not having intention.  I'm talking about defining it.  What separates a system with "intention" from any other physical system?  A brain is a collection of physical materials going through their individual processes, and so is a galaxy.  Why do you say one is intentional, and one is just stuff happening?

I already gave you a definition but you seem determined to simply draw this into an argument about semantics.

Intention is what a system has when it represents an isomorphism of the environment and acts on that isomorphism to further its interests in its environment. A galaxy neither forms an isomorphic representation nor acts on one.
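To make that definition concrete, here's a minimal sketch in Python (the agent and every name in it are invented for illustration): the system holds an internal state that mirrors something in its environment, and it acts on that state, not on the world directly, to further its own interest in staying on the line.

Code:
class LineFollower:
    """Toy agent on a one-dimensional track."""

    def __init__(self):
        self.estimated_offset = 0.0          # internal state isomorphic to the real offset

    def sense(self, true_offset):
        self.estimated_offset = true_offset  # the representation tracks the world

    def steer(self):
        # acts ON the representation, in its own interest (staying on the line)
        return -0.5 * self.estimated_offset

A galaxy's state is also pushed around by photons, but it holds no structure that mirrors its surroundings and is then used to guide what it does next, which is the distinction being drawn here.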

(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote: In an experiment at the Max Planck Institute, dogs and chimpanzees were compared on a simple test. A small reward was put under one of two cups, and the experimenter would point to the cup that concealed the reward. The task was to find the reward. The dogs did well on the task, going straight to the cup with the reward, whereas the chimps were not helped by the pointing. Obviously the dogs understood that the pointing finger was 'about' something in a line from the end of the finger. They understood the intentionality of the gesture, and no amount of human reinterpreting can take that away from them.
It seems to me that what we're really talking about is how human-like certain physical structures are.  This is fine when you have an intact human-like organism, but it sheds little light on my post a couple pages ago: on what level of physical organization does mind supervene?

Or perhaps we're talking about how dog-like human structures are. Or how robot-like certain dog structures are. What makes you think this is a response to your post about supervenience? It's not. But if you ask me, I will answer. Mind doesn't 'supervene' on mere matter of any configuration. Mind is a representational system like that of the robots. The human representational system is capable of greater flexibility and complexity, but this is a difference in degree, not kind.

(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote: A few additional points. First, it's clear that if dogs can possess intentionality, it doesn't take much hardware to implement it. Much as we love our dogs, they aren't exceptional as a species in terms of intelligence. Second, while the test was not performed with wolves, I suspect wolves would be just as stymied as the chimpanzees; and if not the wolves, at least a recent ancestor of theirs. This would show that it takes a relatively short time, 10,000 years, to evolve such intentional behavior. This suggests that the machinery for such behaviors already exists in species like the wolves or the chimpanzees, simply waiting for evolution to tease it out into a manifest behavior. Third, it points to the possibility that intentionality can evolve. Unless we are postulating doggy souls, some mechanistic change occurred in the brain over the course of those 10,000 years to bring out this intentionality. This suggests that the substance necessary for such behaviors is a mechanism, not some mysterious ectoplasm.

So, no, it isn't only WE as thinking humans, but quite likely much of the animal kingdom.
I don't think you got my drift. Whichever organism it is that arbitrarily designates certain states as "information" and others as meaningless, the distinction is still arbitrary. Maybe dogs can do it, maybe chimpanzees can't. But does their lack of interest in any particular set of physical states really mean that those states don't represent information? Or is it just that those organisms are geared toward an interest in the states of certain kinds of systems?

This is important because, for the last few pages, we've been talking about information processing as an indicator of mind, and I still haven't seen a non-arbitrary explanation of which physical states or systems do or don't represent information. So far as I can tell, information seems to be something like "useful or interesting physical states or processes" (which implies that observation by a subjective agent is required in order to say something is information), whereas anything lacking interest is just stuff that happens. But this seems far too subjective to serve as the basis for a theory of mind.

Again with dragging the conversation back to petty points. I never pointed to "information processing" as an indicator of mind. I pointed to representational systems as an indicator of mind. And by representational system, I don't mean just the robot itself or its computer. By representational system I mean the entire feedback loop, which includes the sensors, which feed the computer, which feeds the actuators on the wheels, which then feed back upon the environment, which then feeds back into the sensors. It is an entire economy in which "information processing" is only a component. It isn't just that it is processing information, but that it is using information in a specific way to further its own goals. (Though they need not be goals as such, only that the reactions are self-sustaining; a robot that immediately drives off the road is not really representing the road, because its actions in response to its data do not form a loop [or only an uninteresting one, if you prefer] and thus it is not a 'system'.) Now if goals and interests are too subjective for you, then I fucking give up; you won't be pleased. I could explain how I feel mind is a representational system, but you seem more intent on simply arguing for the sake of arguing than on shedding light on the questions.
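To picture that loop, here's a rough sketch in Python (the class, the function, and the numbers are all invented for illustration, not anyone's actual robot): sensors feed the controller, the controller drives the actuators, the actuators act back on the environment, and the environment feeds the sensors again.

Code:
class StraightRoad:
    road_width = 1.0   # how far off centre the robot can be before it's off the road
    drift = 0.1        # each step the world nudges the robot off centre a little

    def sense(self, position):
        return position                              # the environment feeds back into the sensors

    def apply(self, position, correction):
        return position + correction + self.drift   # the actuators feed back upon the environment


def run_loop(environment, steps=100):
    position = 0.0
    for _ in range(steps):
        reading = environment.sense(position)        # sensors feed the computer
        correction = -0.5 * reading                  # the computer maps the reading to an action
        position = environment.apply(position, correction)
        if abs(position) > environment.road_width:
            break                                    # loop broken: off the road, no longer self-sustaining
    return position

run_loop(StraightRoad()) settles near 0.2 and stays on the road; drop the correction and the very same "information processing" no longer closes the loop, which is the sense in which the representation stops being meaningful.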
RE: Seeing red
(January 22, 2016 at 8:07 pm)Jörmungandr Wrote: Its instructions and the data in the hardware are very real physical manifestations.
Of what are they manifestations?




