RE: Do you believe in free will?
March 12, 2012 at 4:55 am
(This post was last modified: March 12, 2012 at 5:09 am by Angrboda.)
@Whateverist
A book I just read (The Belief Instinct, Jesse Bering) suggests that having a theory of mind — a theory of other minds — pops into existence around 5-7 years old (the "Princess Alice" experiments, IIRC).
Daniel Dennett Wrote:Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.
Daniel Dennett, The Intentional Stance, p. 17
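If it helps to make Dennett's recipe concrete, here it is caricatured as a toy predictor in Python. Everything here — the names, the "world model," the food-seeking purpose — is my own invention, purely illustrative, nothing from Dennett:

```python
# The intentional stance as a four-step toy: (1) treat the object as a
# rational agent; (2) ascribe the beliefs it ought to have given its place
# in the world; (3) ascribe the desires it ought to have given its purpose;
# (4) predict it will act to further those desires in light of those beliefs.

def ascribe_beliefs(place_in_world):
    """Step 2: the beliefs a rational agent ought to have, given its situation."""
    return {"food_at": place_in_world.get("nearest_food")}

def ascribe_desires(purpose):
    """Step 3: the desires it ought to have, given its purpose."""
    return {"wants": "food"} if purpose == "survive" else {}

def predict_action(place_in_world, purpose):
    """Step 4: predict the act that furthers its goals in light of its beliefs."""
    beliefs = ascribe_beliefs(place_in_world)
    desires = ascribe_desires(purpose)
    if desires.get("wants") == "food" and beliefs["food_at"] is not None:
        return ("move_to", beliefs["food_at"])
    return ("wait", None)

print(predict_action({"nearest_food": (3, 4)}, "survive"))  # ('move_to', (3, 4))
```

The point of the caricature: nowhere does the predictor peek inside the agent; it works entirely from what a rational agent in that position *ought* to believe and want.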
It's been far too long since I read The Intentional Stance to comment further, but I had an interesting conjecture last year which I offer without even a pretense that there is any evidence for it; it's simply food for thought. There is a theory, Simulation Theory, which suggests that we reason about other people's mental states by running the scenario on our own hardware and seeing how we would respond.
SEP Wrote:The simulation (or, “mental simulation”) theory (ST) is a theory of everyday human psychological competence: that is, of the skills and resources people routinely call on in the anticipation, explanation, and social coordination of behavior. ST holds that we represent the mental states and processes of others by mentally simulating them, or generating similar states and processes in ourselves: thus, for example, anticipating another's solution to a theoretical or practical problem by solving the problem ourselves (with adjustments for evident disparities, e.g., in skill level). The basic idea is that if the resources our own brain uses to guide our own behavior can be modified to work as representations of other people, then we have no need to store general information about what makes people tick: We just do the ticking for them. Simulation is thus said to be process-driven rather than theory-driven (Goldman 1989).
This idea resonates with me because I can't help but suspect that in activities such as visualizing things in our head, we are in effect "hijacking the normal visual hardware" of the brain to create imaginary stimuli; likely the same with both sound and language. There are theories of dreaming, which I won't advocate, but which suggest that dreaming is a result of neuronal activity in the brain stem during sleep triggering activity in higher cortical centers such as the visual cortex. (Are the thoughts in our heads thought in language, or is linguistic thought just a value-added extra? I frequently bring up the question of whether people without language — deaf, dumb, blind, whatever — are likely to experience their consciousness as fundamentally different from our own. Anyway, still not my goal; moving on.)
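The "we just do the ticking for them" idea can also be put in toy form: run your own decision routine offline, with the parameters adjusted for the other person's evident disparities (skill level, in the SEP's example). Again, all of this is my own illustrative invention, not anything from the ST literature:

```python
# Simulation Theory in miniature: no stored theory of "what makes people
# tick" — just my own routine, re-run with the other's parameters plugged in.

def solve(problem, skill):
    """My own problem-solving routine: find divisors; higher skill finds more."""
    candidates = [n for n in range(2, problem) if problem % n == 0]
    return candidates[:skill]  # a weaker solver finds fewer of them

def simulate_other(problem, their_skill):
    """Predict the other's answer by running MY routine with THEIR parameters."""
    return solve(problem, their_skill)

print(solve(12, skill=4))                  # my answer: [2, 3, 4, 6]
print(simulate_other(12, their_skill=2))   # predicted answer: [2, 3]
```

The design point is the one the SEP quote makes: the prediction is process-driven (re-using the machinery) rather than theory-driven (consulting a database of generalizations about people).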
This is the conjecture. It occurred to me that perhaps we have the telescope the wrong way round. How important is it for an animal to have a theory of itself? My thought was that perhaps the cognitive framework in which intentionality and will arose was first applicable, not to the self, but to the other. Take the evolution of, say, fish. If we're a fish looking for a smaller fish to eat, we don't require predictive abilities about our own behavior. But if we want to eat, we need to be able to make predictive guesses about what an unpredictable system — the other fish — might do. For that, we need a cognitive framework in which our lack of information is part of the framework — suggesting the fish may do anything, because our information about it isn't sufficiently rich to do otherwise. Wayne Gretzky said, "A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be." A great predator plays where the fish will be, just as a good fighter pilot shoots where his target will be.
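The Gretzky point is, at bottom, dead reckoning: even a crude model of the target (say, constant velocity) lets a predator aim at where the prey will be rather than where it is. A trivial sketch, entirely my own:

```python
# "Play where the puck is going to be": constant-velocity extrapolation.
# Richer targets (other agents) need richer models — which is where the
# intentional stance comes in — but even this crude one beats aiming at
# the current position.

def predict_position(pos, vel, t):
    """Where the target will be after t seconds, assuming constant velocity."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

puck_now = (10.0, 0.0)   # current position
puck_vel = (2.0, 1.0)    # units per second
print(predict_position(puck_now, puck_vel, 3.0))  # (16.0, 3.0)
```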
Is it possible that the Intentional Stance (of the other) preceded applying that idea to the self? Anyway. No evidence whatsoever, just an interesting thought.
I'll leave you with a couple of examples from the animal world which should give us pause about our special place at the top of the cognitive tree (joking).
They tested an elephant for awareness of self. They painted a white cross on her forehead, and let her loose in a pen which contained a mirror at her head height. When she saw her reflection in the mirror, she repeatedly tried to rub the white mark off her forehead (not the reflection).
Crows are a very intelligent species, IIRC rivaling chimps in some respects. Crows engage in what is known as caching, in which they hide food so that they can eat it later. However, their caching behavior often includes "pretending to cache food" while in fact caching it elsewhere. Now, this behavior can certainly evolve without any deliberate conscious intent regarding the mental abilities of other crows, but there are a couple of points here worth remarking on. First, given the intelligence of crows, it's not far-fetched that they might be reasoning intentionally. But more importantly, how do we draw a line between cognitive behaviors that are teleological because of our ability to think about ends, and those behaviors that, like the crows', may have evolved as a consequence of the stochastic effects of pseudo-teleological behavior on our ability to survive, prosper and breed?
And finally, there is the example of bottlenose dolphins (I may have the species wrong). A pair of dolphins were trained to perform certain tricks when flashed certain visual symbols. One of the symbols was the command to "invent a trick," or improvise. They tested the two dolphins together and flashed them the symbol to wing it. Both dolphins submerged momentarily, then emerged and performed the same trick in synchronization. The level of cognitive development necessary to communicate and plan as they did must surely be considered in the ballpark of our own.
ETA: There are studies that seem to indicate that the neurological events which signal, say, the movement of a finger precede our conscious intent and awareness of intending to move the finger. (Approx. 150-300 milliseconds, IIRC.) Some aspects are controversial, but it dovetails with another contemporary theory, that of the adaptive unconscious (see Wikipedia). The idea is that our decisions occur below the level of consciousness, and that consciousness simply acts as a reporter, confabulating an explanation for an event it didn't originate. (I suggest looking into the terms "adaptive unconscious" and "confabulation," and Sperry and Gazzaniga's work on split-brain subjects. No time to track down references, but Jon Haidt's work is of relevance as well, and to a lesser extent that of Daniel Kahneman — not speaking of Prospect Theory, which is unrelated.)