(January 19, 2022 at 2:19 pm)emjay Wrote:(January 19, 2022 at 10:21 am)polymath257 Wrote: Well, an example that isn't motivated by causing suffering is giving an anesthetic to an animal during surgery. Suppose a limb needs to be removed. Descartes would say that it is acceptable to hack through it because the animal feels no real pain. I would say that an anesthetic would be required unless there is a good medical reason otherwise. Not all pain is inflicted with the intent to cause suffering.
Fair point. And it's not the only example of potentially faulty assumptions about the presence/absence of phenomenal consciousness... I read recently about medical cases where, due to brain structure, certain people were believed not to be conscious but then exhibited signs of consciousness... showing only that our knowledge of the neural correlates of consciousness is incomplete, and that making medical assumptions based on it is therefore dangerous. So I agree these sorts of assumptions about the presence or absence of conscious experience can have serious real-world implications, which is again why I would say the only ethical thing to do... on the principle of trying to prevent even the risk of causing harm... is, in the presence of any such doubt, to err on the side of assuming the presence of phenomenal consciousness. Granted, the structural issue is a different one from these dualistic/PZ issues, but there is crossover in the sense of making assumptions about the presence or absence of consciousness in a way that can have serious real-world effects.
Quote:I do think that one crucial aspect for consciousness is interaction with a changing environment *and* maintaining a record of internal states in memory. So a simple switchboard would not have the memory aspect.
As I said, that was just meant as a rough analogy. A computer with discrete memory locations is not really a good analogy for the brain, and nor is a switchboard, since neurons are basically just nodes in an ever-dynamic network, each essentially 'learning' and recalling in the same process... and it was that dynamic connectivity and learning that I was trying to emphasise with the switchboard example... a flux of ever-changing neural dynamics of activation and connectivity/association, where that dynamic change in connectivity/association *is* learning.
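Just to make that 'change in connectivity is learning' idea concrete, here's a minimal Hebbian-style toy sketch (purely illustrative, not a model of real neurons; the network size, learning rate, and update rule are all my own arbitrary choices). The point is only that there is no separate memory store: 'learning' and 'recall' are the same weights being adjusted and reused.

```python
import numpy as np

n = 5                         # number of nodes in the toy network
weights = np.zeros((n, n))    # connectivity/association strengths

def step(activity, weights, rate=0.1):
    """One Hebbian update: connections between co-active nodes strengthen.

    There is no discrete 'memory location' here -- the same weight matrix
    is both the record of past activity and the driver of future recall.
    """
    # outer product: weight (i, j) grows when nodes i and j fire together
    return weights + rate * np.outer(activity, activity)

# present the same activity pattern a few times; its internal
# associations strengthen a little more on each presentation
pattern = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
for _ in range(3):
    weights = step(pattern, weights)

# 'recall': a partial cue (only node 0 active) drives input to the
# other nodes it has become associated with
cue = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
recalled = weights @ cue
print(recalled)  # nodes 0, 2 and 4 receive the strongest input
```

Obviously a caricature, but it captures the bit I was gesturing at: the 'switchboard wiring' is not fixed, and the act of rewiring it just *is* the learning.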
Quote:And I also think we need to get a better vocabulary. If I put a noxious chemical in with a bacterium, it will react and move away. That *is* a type of awareness. But I suspect it is quite different than what humans have simply because of the differences in complexity of the information processing. Plants *do* respond to changes in their environment, even releasing chemicals 'informing' other plants of dangers, leading them to react in ways that are protective. That is *also* a type of awareness, but it seems to be significantly different than the other two types.
There is a sense in which *everything* that is alive maintains information about its internal state and its environment in order to maintain homeostasis. That is a type of awareness.
In a different direction, even someone who is asleep (unconscious?) is processing information from the environment and can wake up if something unusual happens. That is also a certain type of awareness. There are other states, like 'conscious sedation', that also seem relevant.
We need a fuller vocabulary to discuss the similarities and differences between all of these different types of awareness. This lack of vocabulary and precision makes the discussion of consciousness very difficult because we cannot point to examples distinguishing the process we want to talk about.
I think this *is* a problem in the study of consciousness, but again, it is a 'soft' problem, not a hard problem.
I see the problem of consciousness as a question about information processing and that alone: how and why do we become 'aware' (have information) of something?
I think in practical terms we probably have very similar goals and outlooks on this question... ie we're both interested in what I called 'neural consciousness' (as opposed to phenomenal consciousness)... the neural correlates of consciousness, from the point of view of information processing and so on. Likewise, I think we're both more interested in the 'soft problems' of consciousness than the 'hard problem'... like you, I think a complete and consistently reliable predictive mapping of neural states to conscious states would be as close to a complete explanation of consciousness as we could ever expect to get. I don't know if I would go so far as to say, as you seemed to earlier, that that consistent correlation *is* causation from a physics point of view, and therefore that there is no further question beyond it; but then, I'm not a physicist/scientist, whereas you presumably are?
In contrast, on that same issue, I think of the hard problem... an explanation of how something physical like the brain gives rise to something apparently immaterial like phenomenal consciousness... as something that can only really be addressed with philosophical speculation, not science. So despite us getting there by different routes, it looks like we're roughly arriving at the same destination: you think there is no causation question to answer beyond that consistent correlation from a physics point of view, whereas I think, possibly due to dualist baggage skewing my thinking on these types of questions, that there *is* a question to answer, but that science can't answer it, only philosophical speculation.

That being the case, I see it as something for idle speculation only, not something to put all my eggs into; to do so, either individually or as a population, is basically a fool's errand that only serves to keep the whole issue of consciousness shrouded in mystery... gravitating around a question we can never really expect to answer beyond unverifiable speculation, and stifling real progress on the 'softer' problems of consciousness, such as comprehensively mapping the neural correlates of consciousness. Cumulatively, I think that mapping will amount to as full an explanation of consciousness as we can ever realistically expect to get, and one not really lacking in any practical, meaningful way. The difference between us seems to be that for you that would be a complete explanation, whereas for me it would still leave this philosophical question untouched.

Basically, if I'm understanding you correctly, from your physics point of view you would write off this entire paragraph as at best irrelevant, or at worst an erroneous/skewed perception of the problem? Or an unneeded, superfluous problem? On account of seeing the buck of causation stopping at that reliable and consistent correlation?
Not angry, just want to be clear if that's what you're saying.
Well, I see such philosophical speculation as being on par with fiction or fantasy. It is fun to discuss over drinks, but ultimately meaningless.
Quote:So yeah, I definitely agree I think there is a language barrier here so to speak, where we need a better vocabulary about these sorts of questions, because as it stands it's well open to conflation and misunderstanding.
Like, I'm not sure I understand how you're defining awareness here. Basically, does awareness imply phenomenal consciousness to you, or can a distributed neural state, in both time and space, but with nothing else involved (ie no phenomenal experience of any kind), be considered awareness?
Well, yes, I would consider it a low level of awareness. It detects things in the environment and reacts to them.
But I am also very unclear about what, precisely, constitutes 'phenomenological consciousness' and how that would be distinguished from the information processing of awareness. In other words, when you say there is 'nothing else involved', it comes across as saying there is molecular motion with no temperature.
Quote:I don't disagree that it's information processing, but whether it's what I'd think of as awareness, I'm not so sure. It's awareness in a sense, I'd say... in a kind of abstract global sense that it is information that will drive the behaviour of the organism, whether there are experienced phenomena or not (back to PZ assumptions, I know), but I just want to be clear whether you mean it in a phenomenal sense or not. This seems to be the crux of the communication problem between us... I have these potentially dualist assumptions, splitting consciousness into two aspects, phenomenal and physical/neural, whereas you don't see it that way at all, or believe they are truly inseparable to such a degree that splitting them apart even hypothetically or conceptually is erroneous thinking? Basically, do you make any distinction between the presence and the absence of phenomenal experience in your scientific thinking? I still think it might be difficult to get beyond this language barrier if the two are conceptually inseparable to you. I think I can get as far as saying awareness is a conceptual middle ground representing the abstract/summary information encoded by that distributed neural state at any given moment, but whether I can equate that with phenomenal consciousness or not is a different matter... ie I'm not sure, but it looks like you may be saying they are the same thing? If so, that's again something we might differ on, but it might be at least a step towards a common ground between our different perspectives.

I'm not sure I understand what phenomenological consciousness is supposed to be if it isn't awareness, frankly. How does 'consciousness of seeing red' differ from 'seeing red'? I just don't see a difference.
Perhaps the closest I can come is with certain optical illusions, where you have a single image and it 'flips' back and forth, possibly with some control. I would imagine that each 'image' would be a different 'phenomenological experience', right? And we know the difference is just how the brain goes back and forth between 'interpretations'. Is there something more to it?
So here is a related question: how do I know whether or not I am a zombie?
Maybe what I 'think' is 'conscious experience' isn't *actually* conscious experience and is, instead, what is being described as 'in the dark'. How can *I* tell? I certainly see things and hear sounds and feel touch. But that would be true whether or not I have phenomenological experiences, right? I would still be aware of those things. And I would be aware of being aware of them.
So what is the difference between awareness (as in information) and phenomenal consciousness?
When it is described as 'shiny', that is a visual perception and I can distinguish shiny things from non-shiny things. But so would a zombie. So saying that consciousness is 'shiny' is a metaphor, right? But what is it a metaphor of, precisely?