RE: On naturalism and consciousness
August 17, 2014 at 6:52 pm
(This post was last modified: August 17, 2014 at 7:06 pm by Angrboda.)
Churchland Wrote:Intentionality: Intrinsic and Derived
Consider the following very simple argument:
Recall that for a state to have intentionality is basically for it to have meaning or content (or "aboutness") in the sense described above.
(1) If the intentionality of propositional attitudes is a physical property, then it should be possible to build a computer whose states have genuine intentionality.
(2) But no computer model that simulates human propositional attitudes will have states with genuine intentionality.
(3) Therefore: the intentionality of propositional attitudes is not a physical property.
The second premise bears the weight of the argument, and it requires a closer look. In its defense, it will be said that however sophisticated the behavioral output of the computer, and however closely it may simulate the behavior of a person, none of its linguistic outputs will really have meaning. To test this claim, suppose there has been built a robot with a behavioral repertoire much like my own, that it is fitted out with sensory receptors for seeing, hearing, smelling, and so forth, and motor effectors. This may require that its internal organization mimic my brain's organization even down to very low levels, but that is an empirical question to be decided by empirical research. Just assume that our internal systems are sufficiently similar that our behavioral repertoires resemble one another as closely as Popper's and mine do. Suppose also that the robot has a device for emitting sounds, and on a certain occasion after having its visual scanners trained on the morning news broadcast from Houston Space Center, it goes into an internal state identified by "The space shuttle was launched this morning," such that it then wheels into your office and emits the utterance "The space shuttle was launched this morning."
Appearances notwithstanding, continues the argument, the robot's utterance has meaning only because we give it meaning — that is, only because we interpret it as meaning what we mean when we say "The space shuttle was launched this morning." Without our interpretive grace, neither the robot's utterances nor its internal states have any meaning. If we are tempted to say that its internal state was the thought that the shuttle was launched this morning, then we must beware that its so-called thought state represents something about the shuttle only by our so interpreting. Its internal states are mere machine states, representing nothing and meaning nothing. In short, the meaning of its outputs and hence the intentionality of its states is derived from our meaning and the intentionality of our states. In contrast, my thought that the space shuttle was launched this morning has original and intrinsic meaning, rather than derived meaning. That is, the meaning of my internal state is not dependent on or a function of anyone's interpreting it to have a meaning, but is a matter of my meaning something by it. Thus the argument.
This argument has many puzzling aspects, the first of which pertains to what it must be assuming about how the states of biological persons have meaning and intentionality. To clarify this a bit, consider the meaning of my utterance "The space shuttle was launched this morning." What I mean by that, and whether you and I mean the same thing, depends in obvious and intricate ways on what else I believe. For example, if I should believe that a space shuttle is a banana and that to launch something is to put it in the blender, then what I mean is not what Walter Cronkite means when he says "The space shuttle was launched this morning." And what I believe is likewise different. To the extent that your background beliefs are very different from mine, the meaning of the words we use will be correspondingly different. To the extent that we share beliefs, our meanings will be shared. If my background theory of the heavens is Ptolemaic and yours is Copernican, then we shall mean something quite different by "planet." (Ptolemaic theory holds that planets are stars not fixed on the celestial sphere; Copernican theory holds that planets are not stars but cold bodies revolving around the sun, and that Earth is a planet.) What someone means by an utterance depends on the related beliefs that he has, and in turn the content of his beliefs is a function of what he means by certain expressions, in one big ball of wax (Quine 1960). The meaning of an expression for an individual is a function of the role that expression plays in his internal representational economy — that is, of how it is related to sensory input and behavioral output and of its inferential/computational role within the internal economy. Sparing the niceties, this is the network theory of meaning, otherwise known as the holistic theory or the conceptual-role theory. (See Paul M. Churchland 1979. Rosenberg 1974. Field 1977.) Translation is accordingly a matter of finding a mapping between alien representations and one's own such that the network of formal and material inferences holding among the alien representations closely mirrors the same network holding among our own. It is possible that representational economies may be so different that translations are not possible.
Meaning is therefore relational in the sense that what an expression means is a function of its inferential/computational role in the person's internal system of representations, his cognitive economy. This is not to say that an expression has meaning only if someone interprets or translates it as having a particular meaning. However, it does imply that isolated expressions do not somehow sheerly have meaning and that mentality cannot somehow magically endow an utterance with intrinsic meaning. What it does deny is that meaning is an intrinsic feature of mental states and that a state has the meaning it has regardless of the wider representational system. Moreover, it contrasts with a theory of meaning that says that the meaning of a word is the set of objects it is true of and that the meaning of a sentence is to be identified with the state of affairs that makes it true.
With this brief background in theory of meaning, we can return to the central question raised at the outset: can the robot, whose behavior is very like my own, be said to have thoughts with meaning and more generally to have states that represent that p? Now in order to simulate my outer behavior as closely as the premise asserts, the robot will have to have an internal system of representations and computations of a richness roughly comparable to my own. Consequently, it will have elements whose roles have a pattern comparable to the roles played by elements in my internal economy. But if the elements in its internal economy are close analogues of my own, if their roles mirror those in my economy, then what else do they need to have meaning?
To refuse to assign meaning — meaning as genuine as it gets — to the robot's internal states would therefore be to apply a double standard, arbitrarily and to no useful purpose. To bridle here looks like dogmatism. What the robot means by an expression will, as with me, be a function of the role that expression plays in the internal representational economy of the robot. If I can find a mapping between its representational system and my own, then I shall have a translation of its utterances. So much is anyhow all one has to go on in ascribing intentionality to other humans. This is not to say that the robot's internal states have meaning only if I interpret them, for after all, the elements of its representational economy objectively bear the inferential/computational relations that they do to each other, regardless of whether I encounter the robot or not.
If the robot turns out to be a fake, inasmuch as its effects are really produced by a small boy hiding inside, then the intentionality is derived, for the robot has no system of representations of its own in virtue of which inferences are drawn and so forth. On the other hand, if it has a brain of electronic stuff, if its behavioral output is a product of its complex internal system of representations implemented in its brain, then its utterances have meaning in exactly the way mine do.
That the robot looks and smells different from a human, that its "brain" is a structure of silicon pico-chips, is in the end irrelevant to whether or not it believes things, wants things, understands what it hears — in general, whether its states have intentionality. As Douglas Hofstadter has pointed out (in conversation), if simply looking different and having different bodily parts were decisive in determining intentionality, then one could envision much the same argument used persistently by male humans to conclude that females do not have states with original, non-derived, real intentionality. They have merely "femintentionality." Or, obviously, the roles could be reversed, and women could claim that men do not have original non-derived, real intentionality. They have merely "mascintentionality."
Finally, as noted in discussing Popper's argument, the discovery that we humans were actually the product of extraterrestrial intelligence should make no difference to whether we (really) have beliefs, desires, and thought and whether our utterances (really) mean something. The complex inferential/computational relations between representational items in a system are whatever they are, regardless of where the system ultimately came from. But if the antireductionist argument were correct, then such a discovery concerning our origins should make us conclude that we do not have real intentionality after all. Rather, it should make us "conclude," since of course we could not really conclude anything.
Whether a robot has intentional states at all will depend, inter alia, on how complex its internal informational network of states is, on its sensory detectors, and on its motor effectors, but it is important to stress two points here. First, there is no criterion for exactly specifying when the complexity is enough and when it is not, or for saying just how the system of representational states must hook up to the world. If the internal informational network is as complex as that enjoyed by an adult human, it is enough, but if it is as simple as that of a sea squirt or a thermostat, it is not enough. The extremes are clear enough, then, but in the middle ground we are less sure. At this stage of our understanding, determining whether something has intentional states is mostly a matter of guessing how like us it is — the greater the resemblance, the more likely we will say that it has intentional states. This is not because we know what it is for a system to represent, but because we don't know, so we proceed with the founding assumption that we are paradigmatic representers. The problem is that in the absence of a robust theory of information processing there can be no precise criterion for exactly how complex an internal network must be and how it must hook up to the world in order to count as bestowing genuine meaningfulness.
Accordingly, the imprecision here is at least in part a function of theoretical immaturity. There are theoretically embedded definitions that specify exactly when something is a protein or an amino acid, because chemical theory is a well-developed theory. Until there is a more developed empirical theory about the nature of representing in organisms, greater precision will have to wait. The fact is, we simply do not know very much about how organisms represent, and what sort of business representing is, or even whether our concept of having a representational system delimits a natural kind. And even when we do know more, imprecision may be our lot, as it is in the case of "species." To force precision by grinding out premature definitions enlightens nobody. Nor, I suspect, will repeated analysis of the folk psychological category of meaning avail us much ...
— Patricia Churchland, Neurophilosophy
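To make the "network" (conceptual-role) picture in the passage above a little more concrete, here is a deliberately crude toy sketch. The assumptions are all mine, not Churchland's: a representational economy is reduced to a bare set of inference links between expressions, the "meaning" of an expression is just the set of links it enters into, and translation is a brute-force search for a mapping between two economies that preserves the link structure.

```python
# A toy model of the "conceptual-role" / network theory of meaning sketched
# in the quoted passage. The vocabularies, the notion of an "inference link",
# and the brute-force matcher below are my own illustrative assumptions,
# not anything from Churchland's text.

from itertools import permutations

# A representational economy, crudely modelled as a set of directed
# inference links: (premise expression, conclusion expression).
MY_ECONOMY = {
    ("shuttle_launched", "shuttle_in_orbit"),
    ("shuttle_in_orbit", "mission_underway"),
    ("shuttle_launched", "news_reported_launch"),
}

# The robot's internal states, under labels we do not yet understand.
ROBOT_ECONOMY = {
    ("s1", "s2"),
    ("s2", "s3"),
    ("s1", "s4"),
}

def vocabulary(economy):
    """All expressions occurring in an economy's inference links."""
    return sorted({expr for link in economy for expr in link})

def conceptual_role(expr, economy):
    """On this toy picture, an expression's 'meaning' is the set of links it enters into."""
    return {link for link in economy if expr in link}

def find_translation(alien, mine):
    """Brute-force search for a mapping from alien expressions onto mine that
    preserves the whole inference network (i.e. the two networks mirror each other)."""
    alien_vocab, my_vocab = vocabulary(alien), vocabulary(mine)
    if len(alien_vocab) != len(my_vocab):
        return None  # economies too different in size to map one-to-one
    for candidate in permutations(my_vocab):
        mapping = dict(zip(alien_vocab, candidate))
        if {(mapping[a], mapping[b]) for a, b in alien} == mine:
            return mapping
    return None  # economies too different in structure; no translation

print(find_translation(ROBOT_ECONOMY, MY_ECONOMY))
# {'s1': 'shuttle_launched', 's2': 'shuttle_in_orbit',
#  's3': 'mission_underway', 's4': 'news_reported_launch'}
```

On this toy picture the robot's state s1 gets translated onto my shuttle_launched solely because it occupies the same place in the inference network, which is the sense in which, on the view quoted above, nothing further seems required for its states to mean what mine do.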
There are some caveats that go along with the network theory of meaning (aka Semantic Holism) which bear noting and considering:
Wikipedia: Semantic Holism Wrote:Problems with semantic holism
If semantic holism is interpreted as the thesis that any linguistic expression E (a word, a phrase or sentence) of some natural language L cannot be understood in isolation and that there are inevitably many ties between the expressions of L, it follows that to understand E one must understand a set K of expressions to which E is related. If, in addition, no limits are placed on the size of K (as in the cases of Davidson, Quine and, perhaps, Wittgenstein), then K coincides with the "whole" of L.
The many and substantial problems with this position have been described by Michael Dummett, Jerry Fodor, Ernest Lepore and others. In the first place, it is impossible to understand how a speaker of L can acquire knowledge of (learn) the meaning of E, for any expression E of the language. Given the limits of our cognitive abilities, we will never be able to master the whole of the English (or Italian or German) language, even on the assumption that languages are static and immutable entities (which is false). Therefore, if one must understand all of a natural language L to understand the single word or expression E, then language learning is simply impossible.
Semantic holism, in this sense, also fails to explain how two speakers can mean the same thing when using the same linguistic expression, and therefore how communication is even possible between them. Given a sentence P, since Fred and Mary have each mastered different parts of the English language and P is related to the sentences in each part differently, the result is that P means one thing for Fred and something else for Mary. Moreover, if a sentence P derives its meaning from the relations it entertains with the totality of sentences of a language, as soon as the vocabulary of an individual changes by the addition or elimination of a sentence P', the totality of relations changes, and therefore also the meaning of P. As this is a very common phenomenon, the result is that P has two different meanings in two different moments during the life of the same person. Consequently, if I accept the truth of a sentence and then reject it later on, the meanings of what I rejected and what I accepted are completely different, and therefore I cannot change my opinions regarding the same sentences.
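A minimal sketch of that last worry, assuming, purely for illustration and not as anything like the article's own machinery, that a sentence's "role" is just the set of other sentences in the speaker's repertoire that share vocabulary with it: the same sentence P occupies different roles for Fred and Mary, and its role for a single speaker shifts as soon as one sentence is added.

```python
# Toy illustration of the holism worry above: if a sentence's "meaning" is its
# role within the speaker's whole network of sentences, then any change to the
# network changes that meaning. The word-overlap relation used here is my own
# crude stand-in, chosen only to make the structural point.

def role(sentence, repertoire):
    """A sentence's 'role': the other sentences it shares at least one word with."""
    words = set(sentence.split())
    return {other for other in repertoire
            if other != sentence and words & set(other.split())}

P = "the shuttle was launched"

fred = {P, "the shuttle is in orbit", "launches are televised"}
mary = {P, "bananas are launched in blenders"}

# Different repertoires, so P occupies a different role (a different holistic
# "meaning") for each speaker.
print(role(P, fred))   # {'the shuttle is in orbit'}
print(role(P, mary))   # {'bananas are launched in blenders'}

# Adding a single sentence to Fred's repertoire shifts P's role yet again.
fred_later = fred | {"the shuttle launch was delayed"}
print(role(P, fred) == role(P, fred_later))   # False: P's "meaning" has changed
```

Whether anything this coarse-grained captures what holists actually mean by "role" is exactly what is at issue, but it shows why the learnability and communication objections get a grip.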