RE: Seeing red
January 23, 2016 at 3:28 pm
(This post was last modified: January 23, 2016 at 3:57 pm by Angrboda.)
(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote:Or those systems really have it and all your posturing about WE as thinking humans is just special pleading. "They have intentionality but it's not real intentionality, it's derived." Yes I have the ability to describe the robots behavior in terms of meaning and intention. Unless you're just begging the question, there is no difference between the applied use of these terms to describe the robot than there is to describe WE humans. "The robots have aboutness but it's not the kind of aboutness that I have." How would you know? All you're doing is suggesting that we have richer systems for intentionality and meaning in that we can use our concepts to ascribe meaning to the behavior of robots. That doesn't exclude the possibility that the robots behaviors are deserving of such description.
I'm not talking about having or not having intention. I'm talking about defining it. What separates a system with "intention" from any other physical system? A brain is a collection of physical materials going through their individual processes, and so is a galaxy. Why do you say one is intentional, and one is just stuff happening?
I already gave you a definition, but you seem determined to simply draw this into an argument about semantics.
Intention is what a system has when it maintains a representation isomorphic to its environment and acts on that representation to further its interests in that environment. A galaxy neither forms an isomorphic representation nor acts on one.
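If it helps, here is a toy sketch in Python of the difference I'm pointing at. Everything in it (the Agent class, the little WORLD list) is made up purely for illustration, not something from this thread: the agent keeps an internal map whose structure mirrors the world and steers itself by that map, which is exactly what a galaxy never does.

```python
# Illustrative toy only: an agent whose internal map is structure-preserving
# with respect to a 1-D world, and which acts on that map to further an
# "interest" (reaching food).

WORLD = ["empty", "empty", "food", "empty", "empty"]  # the environment

class Agent:
    def __init__(self, position):
        self.position = position
        self.internal_map = [None] * len(WORLD)  # isomorphic: one slot per world cell

    def sense(self):
        # Copy what is observable into the internal representation.
        for i, cell in enumerate(WORLD):
            self.internal_map[i] = cell

    def act(self):
        # Act on the representation, not on the world directly.
        targets = [i for i, cell in enumerate(self.internal_map) if cell == "food"]
        if targets:
            target = targets[0]
            if target > self.position:
                self.position += 1
            elif target < self.position:
                self.position -= 1

agent = Agent(position=0)
for _ in range(5):
    agent.sense()
    agent.act()
print(agent.position)  # 2: it has walked to the food it represented
```

The point isn't the code; it's that the map stands in for the world and the behavior runs through the map.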
(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote:In an experiment at the Max Planck institute, dogs and chimpanzees were compared for their performance on a simple test. The test would put a small reward under one of two cups, and the experimenter would point to the cup that concealed the reward. The task was to find the reward. The dogs did well on the task, going straight to the cup with the reward, whereas the chimps were not helped by the pointing. Obviously the dogs understood that the pointing finger was 'about' something in a line from the end of the finger. They understood the intentionality of the gesture, and no amount of human reinterpreting can take that away from them.
It seems to me that what we're really talking about is how human-like certain physical structures are. This is fine when you have an intact human-like organism, but it sheds little light on my post a couple pages ago: on what level of physical organization does mind supervene?
Or perhaps we're talking about how dog-like human structures are. Or how robot-like certain dog structures are. What makes you think this is a response to your post about supervenience? It's not. But if you ask, I will answer. Mind doesn't 'supervene' on mere matter of any configuration. Mind is a representational system like that of the robots. The human representational system is capable of greater flexibility and complexity, but this is a difference in degree, not kind.
(January 23, 2016 at 2:31 pm)bennyboy Wrote:
Quote:A few additional points. First, it's clear that if dogs can possess intentionality, it doesn't take much hardware to implement it. Much as we love our dogs, they aren't exceptional as a species in terms of intelligence. Second, while the test was not performed with wolves, I suspect wolves would be just as stymied as the chimpanzees; and if not the wolves, at least a recent ancestor of them. This would show that it takes a relatively short time, 10,000 years, to evolve such intentional behavior. This suggests that the machinery for such behaviors already exists in species like the wolves or the chimpanzees, simply awaiting for evolution to tease it out into a manifest behavior. Third, it points to the possibility that intentionality can evolve. Unless we are postulating doggy souls, some mechanistic change occurred in the brain over the course of those 10,000 years to bring out this intentionality. This suggests that the substance necessary for such behaviors is a mechanism, not some mysterious ectoplasm.
I don't think you got my drift. Whatever the organism it is that arbitrarily imbues certain states as "information," and others as meaningless, the distinction is still arbitrary. Maybe dogs can do it, maybe chimpanzees can't. But does their lack of interest in any particular set of physical states really mean that that state doesn't represent information? Or is it just that those organisms are geared toward an interest in the states of certain kinds of systems? So, no, it isn't only WE as thinking humans, but quite likely much of the animal kingdom. This is important, because the last few pages, we've been talking about information processing as an indicator of mind, and I still haven't seen a non-arbitrary explanation of what physical states or systems do/don't represent information. But so far as I can tell, information seems to be something like "useful or interesting physical states or processes" (which implies that the observation by a subjective agent is required in order to say something is information), whereas anything lacking interest is just stuff that happens. But this seems far too subjective to serve as the basis for a theory of mind.
Again with dragging the conversation back to petty points. I never pointed to "information processing" as an indicator of mind. I pointed to representational systems as an indicator of mind. And by representational system, I don't mean just the robot itself or its computer. I mean the entire feedback loop: the sensors, which feed the computer, which feeds the actuators on the wheels, which then feed back upon the environment, which then feeds back into the sensors. It is an entire economy in which "information processing" is only one component. It isn't just that the robot is processing information; it is using information in a specific way to further its own goals. (Though they need not be goals as such, only self-sustaining reactions; a robot that immediately drives off the road is not really representing the road, because its actions in response to its data do not form a loop [or form only an uninteresting one, if you prefer], and thus it is not a 'system'.)

Now, if goals and interests are too subjective for you, then I fucking give up; you won't be pleased. I could explain how I think mind is a representational system, but you seem more intent on arguing for the sake of arguing than on shedding light on the questions.
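Since you want something less subjective, here is a crude sketch of the loop I keep describing. Again, it is just my own toy, not anything anyone in this thread has built, and the names (run_robot, steering_gain, offset) are invented for the example: sensors feed a controller, the controller drives the actuators, the actuators change the environment, and the changed environment feeds the sensors again. Ignore the data and the 'robot' just drives off the road.

```python
# Rough sketch of the closed loop described above:
# sensors -> controller -> actuators -> environment -> sensors again.
# The "representational system" is the whole loop, not just the controller.

def run_robot(steering_gain, steps=20):
    offset = 3.0          # environment: how far the robot sits from the line's centre
    drift = 0.5           # the world pushes the robot off-centre each step
    for _ in range(steps):
        reading = offset                       # sensors: sample the environment
        correction = -steering_gain * reading  # controller: act on the representation
        offset += correction + drift           # actuators feed back into the environment
    return offset

# A robot that uses its sensor data stays near the line: the loop is self-sustaining.
print(run_robot(steering_gain=0.5))   # settles close to the line (offset near 1.0)

# A robot that ignores its data just drifts off the road:
# there is no loop worth calling a representational system.
print(run_robot(steering_gain=0.0))   # offset grows without bound (13.0 after 20 steps)
```

The representational system is the whole closed loop, not the arithmetic inside the controller; that is all I have been saying.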