RE: Seeing red
January 31, 2016 at 1:00 am
(This post was last modified: January 31, 2016 at 1:04 am by bennyboy.)
(January 30, 2016 at 10:48 pm)Jörmungandr Wrote: I think I've been clear throughout that it is the system that matters. You keep taking pieces of the system out and asking me if they're the system? No, the pieces aren't; the system is.

Well, if the brain (which is the system in question, I suppose; we can talk about DNA later) is the system, and also the idea, then I'm not sure what an idea is at all. I had assumed that discrete configurations of encoded information were the ideas, and that the brain was the context in which those ideas would have meaning.
Quote:I think 'idea' under whatever definition is going to be too broad to yield any profitable discussion. It's like trying to have a discussion about animals, without being more precise than 'animals'. There are simply too many differences in the species to which that refers to group them all under the same rubric. Maybe if you back up a moment and tell us why you want to focus on 'idea'?

I'd be as happy understanding intention, or will, or "mind," or consciousness. I suspect that definitions of ANY of them will beg the question; again, not out of any dishonesty, but because we are reverse-engineering our ideas about mind from the brain. So suppose we say that intentionality is a function of the brain; specifically, that it involves an organism feeling its needs or wants, polling the environment to see what's missing, accessing memories in order to build a plan to bring its needs to fruition, and then acting on that plan.
This sounds fine, and we can identify to some degree the brain parts, states, and functions that go into this process, and we expect to have a more comprehensive view of it in the future. However, what ABOUT the brain constitutes ideas, and what just constitutes a complex processing of information?
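To make that question concrete, here is a minimal toy sketch (purely my own illustration, with invented names like Agent and poll_environment; it is not anyone's actual model of the brain) of the loop described above: feel a need, poll the environment, consult memory, plan, act. Nothing in it is obviously an "idea" rather than just information processing, which is exactly the distinction I'm asking about.

Code:
# A toy "intentional agent": feels a need, polls its environment,
# consults memory, builds a plan, and acts on it.
# Hypothetical example for discussion only.

class Agent:
    def __init__(self):
        self.hunger = 10                    # a felt "need"
        self.memory = {"food": "kitchen"}   # a remembered past state

    def poll_environment(self, here):
        # Check whether what's needed is missing from the surroundings.
        return "food" not in here

    def plan(self):
        # Use memory to build a plan that would satisfy the need.
        return ["go to " + self.memory["food"], "eat"]

    def act(self, here):
        if self.hunger > 0 and self.poll_environment(here):
            for step in self.plan():
                print(step)
            self.hunger = 0

Agent().act(here=["couch", "television"])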
My other question is how we go from that to a new context (i.e. not the brain, or something made by us) and still have a good understanding of an alien mind. Do we have to apply human standards of intent? What if we have a cleverly built machine? Should we allow that its programming is a kind of "memory," in that it allows the machine to act based on past states of systems even though it has no mechanism of its own for measuring them?