Posts: 8711
Threads: 128
Joined: March 1, 2012
Reputation: 54
RE: Seeing red
January 24, 2016 at 11:26 am
(This post was last modified: January 24, 2016 at 11:27 am by Neo-Scholastic.)
(January 23, 2016 at 8:17 pm)Jörmungandr Wrote: You obviously have something on your mind, asking me these loaded questions, why don't you spill it? I tire of your leading.
You talk about representation and information. Then you get cagey when I ask if the representation is identical to that from which it is made. Seems to me that you cannot describe your robot's behavior in totally physical terms after all!
Posts: 29636
Threads: 116
Joined: February 22, 2011
Reputation: 159
RE: Seeing red
January 24, 2016 at 11:28 am
(This post was last modified: January 24, 2016 at 11:38 am by Angrboda.)
(January 24, 2016 at 7:02 am)bennyboy Wrote: (January 24, 2016 at 3:53 am)Jörmungandr Wrote: I feel like I'm repeating myself and not really making any headway. Is any of this helpful?
I think it's helpful in understanding your view, and I think you've done a good job of describing what KIND of systems you'd call mindful, and I believe it is mostly similar to Rhythm's unless I'm misunderstanding one of you.
I'm not convinced, though, that an intentional system meets my definition of mindfulness-- that is, the subjective experience of qualia. But talking with you guys has given me an idea. I wonder if it might at some point be possible to develop an interface between minds to some degree, by attaching their physical mechanisms. I wonder if a direct brain connection might enable me to actually EXPERIENCE someone else's qualia some day.
My views are similar to Rhythm's, but I feel perhaps I've done a disservice to them. Chad originally claimed that objects cannot have intentionality — that they do not refer in the way thoughts do in a human mind. My response was the robotic driver, and my elaboration on representational systems was a way of putting meat on the bones of that concept of intentionality. In response to questions from you, I have elucidated how this non-conscious form of referring might also parallel intentionality in humans. None of what I've shown is an attempt to explain mindfulness, only to explore the mechanistic side of non-mindful intentionality, and how that might play a role in human referring. I have an idea of how such representational systems can be used to explain mindfulness, but I haven't presented it. To that degree I would wholeheartedly agree that what I've presented doesn't explain mindfulness — it wasn't meant to do so.
With regard to the qualia question, I believe that consciousness amounts to a dynamic model, a map of a specific territory. Only by sharing the details of that model would you share the same experiences. In other words, consciousness reduces to a modeling system, and it is the parameters of that model which determine the content and state of consciousness. So on my view, a direct connection would get you nothing, as it's the data in this modeling system that determines what our experience is like.
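To make the "modeling system" idea concrete, here is a rough Python sketch (everything in it is invented for illustration; it is not a model of any real cognitive architecture). Two systems receive exactly the same raw signal, but because their model parameters differ, what each one makes of the signal differs.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ModelingSystem:
    """Toy stand-in for the 'dynamic model' view: the parameters (here,
    per-feature weights built up from past input) determine how any new
    raw signal is rendered into 'content'."""
    weights: Dict[str, float] = field(default_factory=dict)

    def update(self, raw: Dict[str, float]) -> None:
        # The model's parameters drift with each input it absorbs.
        for feature, value in raw.items():
            self.weights[feature] = 0.9 * self.weights.get(feature, 0.0) + 0.1 * value

    def render(self, raw: Dict[str, float]) -> Dict[str, float]:
        # 'Experience' of the signal = raw signal filtered through the model.
        return {f: v * self.weights.get(f, 0.0) for f, v in raw.items()}


a, b = ModelingSystem(), ModelingSystem()
for raw in [{"red": 0.9, "motion": 0.1}, {"red": 0.8, "motion": 0.2}]:
    a.update(raw)
for raw in [{"red": 0.1, "motion": 0.9}, {"red": 0.2, "motion": 0.8}]:
    b.update(raw)

shared_signal = {"red": 1.0, "motion": 1.0}
print(a.render(shared_signal))  # same wire, different model...
print(b.render(shared_signal))  # ...different rendering of the signal

A "direct wire" between a and b only shares shared_signal; it does nothing to equalize the weights, which is the point being made above.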
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation: 46
RE: Seeing red
January 24, 2016 at 12:00 pm
(January 24, 2016 at 11:28 am)Jörmungandr Wrote: My views are similar to Rhythm's, but I feel perhaps I've done a disservice to them. Chad originally claimed that objects cannot have intentionality — that they do not refer in the way thoughts do in a human mind.
Well, I'm kind of in the middle on the semantics in this case. I think you've made a pretty good case for your definition of intentionality; I can definitely see someone saying, "Look! That Google car plans to turn left, but it's stopped at the stop light." However, I also agree with Chad that intentionality is usually a word related to the agency of a conscious organism.
My problem with your definition is that it represents a slippery slope: specifically, if I accept your (and Rhythm's) definitions of words that have traditionally referred to the human experience (what it's like to be a human) rather than to the function of the human organism (the behaviors that are "output"), then what words will I use when I want to express MY views of how things work?
Posts: 29636
Threads: 116
Joined: February 22, 2011
Reputation: 159
RE: Seeing red
January 24, 2016 at 12:19 pm
(This post was last modified: January 24, 2016 at 12:56 pm by Angrboda.)
In my understanding of consciousness, this 'awareness' that we have inside is simply part of a model in which certain representations are fixed as 'phenomenal descriptions' of the consciousness object. That awareness seems to occur inside our heads is one such description; it is what causes us to situate our awareness there. There is nothing overseeing this part of our brain telling us we are "inside our heads," so it isn't doubted, nor does it become a subject of our awareness. It is simply part of the description of our consciousness; the brain reports it as a fact of our consciousness. Likewise, that our linguistic thoughts "occur" in this space is a fixed perception of the system. The linguistic thoughts aren't occurring there, or anywhere really, but the phenomenal description places them there. Note that this phenomenal model can be spread out across the brain, because it isn't really happening "all at one place" - that is just something the brain tells itself is true, just as it tells itself that it has a phenomenal 'mind' object.
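A toy way to picture these "fixed" entries (purely illustrative Python; every name is invented here): the system can only read the descriptions back out, and there is no second channel by which they could be checked or doubted, so they are simply reported as fact.

class SelfModel:
    # These entries are written into the model; nothing in the system
    # observes them being true, and nothing can overwrite or doubt them.
    _fixed_descriptions = {
        "location_of_awareness": "inside the head",
        "where_thoughts_occur": "in that same inner space",
        "unity": "all happening in one place",
    }

    def report(self, query: str) -> str:
        # Reporting just reads the description back; there is no
        # verification step, so the description is experienced as fact.
        return self._fixed_descriptions.get(query, "no description available")


model = SelfModel()
print(model.report("location_of_awareness"))  # "inside the head"
print(model.report("where_thoughts_occur"))   # "in that same inner space"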
Posts: 29636
Threads: 116
Joined: February 22, 2011
Reputation: 159
RE: Seeing red
January 24, 2016 at 12:33 pm
(This post was last modified: January 24, 2016 at 1:09 pm by Angrboda.)
(January 24, 2016 at 12:00 pm)bennyboy Wrote: (January 24, 2016 at 11:28 am)Jörmungandr Wrote: My views are similar to Rhythm's, but I feel perhaps I've done a disservice to them. Chad originally claimed that objects cannot have intentionality — that they do not refer in the way thoughts do in a human mind.
Well, I'm kind of in the middle on the semantics in this case. I think you've made a pretty good case for your definition of intentionality; I can definitely see someone saying, "Look! That Google car plans to turn left, but it's stopped at the stop light." However, I also agree with Chad that intentionality is usually a word related to the agency of a conscious organism.
It is, but it's also true that such observations seldom go beyond the intuition that "ideas refer" to any actual attempt to understand what it means to say that our ideas refer; in the intuitional phase, it's just an empty concept. I am attempting to offer an explanation that makes sense of both human referring and machine referring. The best that seems to be offered in reply is, "No, it's different; that doesn't fit my intuition." Seldom does anyone tackle the meaning of 'meaning', or what it means to say that an idea refers.
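For what it's worth, here is a minimal sketch of the functional sense of "referring" at issue (hypothetical Python, not any real robot's code): an internal state counts as being about the stop light because it is updated by sensing the light and is what action selection consumes. Nothing in it is conscious; the point is only that the reference relation can be cashed out functionally.

def sense_stop_light(camera_pixels: list) -> str:
    # Stand-in for perception: classify the light from (fake) pixel data.
    return "red" if sum(camera_pixels) > 10 else "green"


def choose_action(internal_representation: dict) -> str:
    # The representation, not the raw pixels, is what drives behavior.
    return "brake" if internal_representation["stop_light"] == "red" else "proceed"


internal_representation = {"stop_light": None}                        # the 'referring' state
internal_representation["stop_light"] = sense_stop_light([5, 4, 3])   # updated by the world
print(choose_action(internal_representation))                         # "brake"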
(January 24, 2016 at 12:00 pm)bennyboy Wrote: My problem with your definition is that it represents a slippery slope: specifically, if I accept your (and Rhythm's) definitions of words that have traditionally referred to the human experience (what it's like to be a human) rather than to the function of the human organism (the behaviors that are "output"), then what words will I use when I want to express MY views of how things work?
I think that maintaining multiple provisional definitions is preferable to inventing a bunch of neologisms. What has "traditionally referred" to human experience is folk psychology; it presents an interpretation of experience that is every bit as theory-laden and arbitrary as my interpretation of the experience. That isn't an argument for preserving something free of baggage; wanting to stick to the traditional referents is simply preferring one set of baggage over another. We have minds that are capable of keeping each person's baggage separate. The only danger, I think, comes when someone concludes that a specific view is baggage-free and therefore preferred. I think it's good to acknowledge that I am adding my own set of baggage to the question. We must attach the baggage to the same words or we won't know what we are discussing. These are alternate theories about the phenomena; they're going to use the same words, at least as a starting point, and the function is what those words refer to. (For example, my definition of value in the "What is 'objective' value?" thread: I redefine value in order to provide an explanation in terms of function. [see below])
(January 10, 2016 at 9:22 pm)Jörmungandr Wrote: Valuing something is placing it within the context of a plan or purpose. Things are always valuable to be used toward some goal. This is the province of intention. Without some form of goal directing the valuation of the thing, the thing is without value. So no, a thing can't be inherently valuable as value implies the designs of an intentioning agent.
Posts: 29636
Threads: 116
Joined: February 22, 2011
Reputation: 159
RE: Seeing red
January 24, 2016 at 1:27 pm
(This post was last modified: January 24, 2016 at 1:59 pm by Angrboda.)
(January 24, 2016 at 11:26 am)ChadWooters Wrote: (January 23, 2016 at 8:17 pm)Jörmungandr Wrote: You obviously have something on your mind, asking me these loaded questions, why don't you spill it? I tire of your leading. You talk about representation and information. Then you get cagey when I ask if the representation is identical to that from which it is made. Seems to me that you cannot describe your robot's behavior in totally physical terms after all!
I can describe it in multiple ways. That's the point. A mechanistic description accounts for the robot's behavior in terms of physical interactions. A functionalist description would incorporate concepts of representation and reference. You appear to be asking me to commit to an ontological stance depending on which form of description I choose. If I choose the mechanistic description, you'll accuse me of not explaining the function. If I choose the functional description, you'll accuse me of using borrowed concepts and not explaining in terms of mechanistic interactions. That's just a disingenuous game. That a level of description is possible says nothing about the ontology of the phenomena, and I can provide multiple descriptions without being obligated to choose one and only one.
Your specific question mixes the two types of description, asking me to provide a mechanistic answer to a functional question; it explicitly puts me in the dilemma just described. I've more than adequately explained how it is both, so I don't get the meaning of your question. "You see no difference between an image and what an image is made out of. They are identical to you?" That's an ambiguous question. Rephrase! What, specifically, do you mean by 'identical'?
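As an illustration of that point (a toy Python sketch with invented names, not a claim about how any actual robot is built): one and the same state change can be given a mechanistic description and a functional description, and neither description adds to or subtracts from what physically happens.

register = {"addr_0x1F": 0}           # mechanistic vocabulary: a memory cell and a value
MEANING = {0: "green", 1: "red"}      # functional vocabulary: what the value represents


def mechanistic_description() -> str:
    return f"memory cell addr_0x1F holds the value {register['addr_0x1F']}"


def functional_description() -> str:
    return f"the robot represents the stop light as {MEANING[register['addr_0x1F']]}"


register["addr_0x1F"] = 1             # one physical event...
print(mechanistic_description())      # ...described mechanistically
print(functional_description())       # ...and described functionally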
Posts: 8711
Threads: 128
Joined: March 1, 2012
Reputation: 54
RE: Seeing red
January 24, 2016 at 2:11 pm
(This post was last modified: January 24, 2016 at 2:11 pm by Neo-Scholastic.)
(January 24, 2016 at 1:27 pm)Jörmungandr Wrote: I can describe it in multiple ways...You appear to be asking me to make a commitment to an ontological stance depending on which form of description I choose...I can provide multiple descriptions without being obligated to solely choose one and only one.
If all anyone can offer are various ways of talking about the problem, then they aren't actually proposing anything at all, just metaphors and analogies. This isn't an issue of description. Descriptions are passive take-aways; they do not have the causal power that you seem to think they have.
The question at hand is reduction. It's about prescription, which is to say: "what makes events play out as they do?" and "what makes things what they are?" Attempting to reduce causal power in either direction, towards purely the material or purely the immaterial, always leaves something out. If someone reduces "Hey Jude" to sound waves that eventually fire neurons, he leaves out the cause of its meaning and affect. If he reduces "Hey Jude" to just its meaning and affect, he leaves out the cause of its actualization.
Posts: 67189
Threads: 140
Joined: June 28, 2011
Reputation: 162
RE: Seeing red
January 24, 2016 at 2:30 pm
(This post was last modified: January 24, 2016 at 2:36 pm by The Grand Nudger.)
(January 24, 2016 at 7:02 am)bennyboy Wrote: (January 24, 2016 at 3:53 am)Jörmungandr Wrote: I feel like I'm repeating myself and not really making any headway. Is any of this helpful?
I think it's helpful in understanding your view, and I think you've done a good job of describing what KIND of systems you'd call mindful, and I believe it is mostly similar to Rhythm's unless I'm misunderstanding one of you.
I'm not convinced, though, that an intentional system meets my definition of mindfulness-- that is, the subjective experience of qualia. But talking with you guys has given me an idea. I wonder if it might at some point be possible to develop an interface between minds to some degree, by attaching their physical mechanisms. I wonder if a direct brain connection might enable me to actually EXPERIENCE someone else's qualia some day.
It may not meet your definition, regardless of its truth. Of course, one might wonder whether one's self would meet that definition as well.
An interesting thought (one on which I think you and I find some common ground): our brains or minds are largely in some form of actual or philosophical isolation. Our brains may not be using the same "code" after all this time spent alone. It may be that without layers and layers of interpretation (spoken words, facial expressions, body language), the exact experience of a system even slightly unlike yourself is, fundamentally, non-communicable.
The experience of the "other" sends a five-digit command line when the user's experience calls for six. What to do? My guess: approximate.
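A rough Python sketch of that "approximate" strategy (the channel counts and values are made up for illustration): the receiver expects six values, gets five, and resamples by linear interpolation, losing whatever distinction the extra channel would have carried.

def resample(values: list, target_len: int) -> list:
    """Stretch or squeeze a list of floats to target_len by linear interpolation."""
    if target_len == 1:
        return [values[0]]
    out = []
    for i in range(target_len):
        pos = i * (len(values) - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(values) - 1)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out


other_experience = [0.2, 0.9, 0.4, 0.7, 0.1]   # five 'digits' sent by the other system
print(resample(other_experience, 6))            # six 'digits' expected: an approximation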
Posts: 29636
Threads: 116
Joined: February 22, 2011
Reputation: 159
RE: Seeing red
January 24, 2016 at 2:58 pm
(This post was last modified: January 24, 2016 at 3:06 pm by Angrboda.)
(January 24, 2016 at 2:11 pm)ChadWooters Wrote: (January 24, 2016 at 1:27 pm)Jörmungandr Wrote: I can describe it in multiple ways...You appear to be asking me to make a commitment to an ontological stance depending on which form of description I choose...I can provide multiple descriptions without being obligated to solely choose one and only one.
If all anyone can offer are various ways of talking about the problem, then they aren't actually proposing anything at all, just metaphors and analogies. This isn't an issue of description. Descriptions are passive take-aways; they do not have the causal power that you seem to think they have.
The question at hand is reduction. It's about prescription, which is to say: "what makes events play out as they do?" and "what makes things what they are?" Attempting to reduce causal power in either direction, towards purely the material or purely the immaterial, always leaves something out. If someone reduces "Hey Jude" to sound waves that eventually fire neurons, he leaves out the cause of its meaning and affect. If he reduces "Hey Jude" to just its meaning and affect, he leaves out the cause of its actualization.
Reduction isn't prescriptive, it's descriptive. It's finding a lower-level description that is isomorphically analogous to the upper-level description. You're right that we're dealing with analogs here; that's what reduction is. The theories themselves are what is prescriptive. I don't agree that a successful reduction [always] leaves something significant out; if it does, then it's not a successful reduction.
Churchland Wrote: . . . reduction is a relation between theories, and one phenomenon is said to reduce to another in virtue of the reduction of the relevant theories. For example, the claim that light has been reduced to electromagnetic radiation means (a) that the theory of optics has been reduced to the theory of electromagnetic radiation and (b) that the theory of optics is reduced in such a way that it is appropriate to identify light with electromagnetic radiation. Similarly, when we entertain the question of whether light is reducible to electromagnetic radiation, the fundamental question really is whether the theory of optics is reducible to the theory of electromagnetic radiation. Hence, when we raise the question of whether mental states are reducible to brain states, this question must be posed first in terms of whether some theory concerning the nature of mental states is reducible to a theory describing how neuronal ensembles work, and second in terms of whether it reduces in such a way that the mental states of TR can be identified with the neuronal states of TB.
— Neurophilosophy, Patricia Churchland
Posts: 8711
Threads: 128
Joined: March 1, 2012
Reputation: 54
RE: Seeing red
January 24, 2016 at 3:05 pm
(This post was last modified: January 24, 2016 at 3:05 pm by Neo-Scholastic.)
Have it your way. Now what is on the other side of passive description? It's one thing to say what kinds of things are changing and how they change. It is another thing entirely to say what makes something the kind of thing that it is and why it changes at all. Your philosophy lacks first principles.