(February 13, 2017 at 8:56 am)Mathilda Wrote:
(February 13, 2017 at 7:26 am)emjay Wrote: I've always wondered, is your perspective just a pragmatic thing because you're a scientist?... in which case I get it and share it for the most part, in that the continued and perfect (imo) correlation between neuroscience and consciousness indicates they are one and the same... but nonetheless, do you never, even just fleetingly or irrationally, wonder about the phenomenal nature of consciousness?
I think I have a pragmatic view of it because I actually think in terms of how to build an artificial intelligence. Everything has a use. Take emotions for example. People assume that they are a burden, but we need them in order to function. We evolved them for a reason and an AI agent will need them too.
If you want to build an agent that can co-operate with other agents, then it will need some way of representing itself in its internal model in relation to others; in other words, some form of personal identity. Empathy allows animals to work together better as a pack and to learn from each other. That means processing visual stimuli and simulating what would happen if the same stimuli were applied to oneself. And then there is the other function of self-awareness: the check and balance of cognition running alongside emotion. That little voice that knows you are angry, for example, and asks: do you really want to be doing this? All this adds utility to the agent and allows it to adapt better.
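To make that concrete, here is a minimal Python sketch of such an agent: an internal model that contains the self alongside others, empathy as simulating an observed stimulus on one's own dynamics, and a cognitive check running alongside the emotional impulse. Every name here (the Agent class, the "anger" variable, the thresholds) is a hypothetical illustration, not any actual system:

```python
# A minimal sketch of the idea above; all names and numbers are
# hypothetical illustrations, not a real architecture.

class Agent:
    def __init__(self, name):
        self.name = name
        # Internal model: the agent represents itself alongside others.
        self.model = {name: {"anger": 0.0}}

    def simulate(self, stimulus):
        # Apply an observed stimulus to one's own emotional dynamics.
        return min(1.0, self.model[self.name]["anger"] + stimulus.get("threat", 0.0))

    def observe(self, other, stimulus):
        # Empathy as simulation: predict how the stimulus would affect
        # *this* agent, and attribute that state to the other agent.
        self.model.setdefault(other, {"anger": 0.0})
        self.model[other]["anger"] = self.simulate(stimulus)

    def act(self, stimulus):
        # Emotion drives a fast default response...
        self.model[self.name]["anger"] = self.simulate(stimulus)
        impulse = "retaliate" if self.model[self.name]["anger"] > 0.5 else "cooperate"
        # ...while the cognitive check runs alongside it:
        # "do you really want to be doing this?"
        if impulse == "retaliate" and stimulus.get("long_term_cost", 0.0) > 0.5:
            impulse = "withdraw"
        return impulse


agent = Agent("a1")
agent.observe("a2", {"threat": 0.8})  # empathy: a2 is probably angry now
print(agent.act({"threat": 0.8, "long_term_cost": 0.9}))  # -> "withdraw"
```

The point of the sketch is only that each ingredient (self-representation, empathy, the metacognitive veto) buys the agent measurable utility; none of it requires anything phenomenal.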
Asking yourself what it means to sense red is a waste of time. You'll never be able to answer it, and if you could, you wouldn't be able to demonstrate it and it probably wouldn't tell you anything interesting anyway. But that's what people are doing and that's why they think it's a hard problem. As a designer of AI, I know how my agents sense the colour of red and what effect it has on them.
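Read functionally, "sensing red" can be sketched in a few lines. Nothing phenomenal is modelled here: red is just an input channel whose value has measurable downstream effects on behaviour. The feature and thresholds below are invented for illustration:

```python
# A functional reading of "sensing red": an input feature with
# downstream effects, nothing more. All values are illustrative.

def sense(pixel):
    """pixel is an (r, g, b) tuple with channels in [0, 1]."""
    r, g, b = pixel
    redness = max(0.0, r - (g + b) / 2)  # crude opponent-style feature
    return redness

def respond(redness, arousal=0.1):
    # The "effect" of red on the agent: it raises arousal and can
    # change which action wins. That is the whole functional story.
    arousal += 0.5 * redness
    return "approach" if arousal > 0.3 else "ignore"

print(respond(sense((0.9, 0.1, 0.1))))  # -> "approach"
print(respond(sense((0.2, 0.6, 0.3))))  # -> "ignore"
```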
I do understand what you're saying... and agree with it (even though it may look like I don't). I think the same way... functionalist epiphenomenalism, with every aspect of consciousness both being essential to the system and having a neural representation... so where you think about it in terms of AI, I think about it in terms of NNs and NN states, and that's my primary way of thinking... relating every experience to NNs. So when I said I considered consciousness to be superfluous, I only meant the phenomenal side of it. In my view, everything in consciousness can be, and must be, represented neurally: everything you can 'notice' in consciousness (which is pretty much everything) you can refer to and label (ie associate with), therefore everything noticeable must have a neural representation. So what I'm saying is that, in my view, what we term consciousness is represented in the network - every aspect... emotion, qualia, perception, cognition etc - so my only question was why there is phenomenal experience at all when everything could be achieved without it (ie a hypothetical true philosophical zombie would be exactly the same as me... it would still be typing away on a forum, still represent thinking and feeling... it just would not have a phenomenal representation of all those things, only a neural one).
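A toy illustration of that labelling argument: if a state can be noticed and labelled, some internal vector must carry it. In the sketch below, "neural representations" are just plain vectors and labelling is nearest-neighbour association; the states and labels are made up for the example:

```python
# Toy version of the labelling argument: anything noticeable gets
# bound to a label via its internal vector. Purely illustrative.

import math

def associate(memory, state, label):
    # Bind a label to an internal activation pattern.
    memory.append((state, label))

def recall(memory, state):
    # Retrieve the label whose stored pattern is closest.
    return min(memory, key=lambda entry: math.dist(entry[0], state))[1]

memory = []
associate(memory, [0.9, 0.1, 0.0], "anger")        # an emotion state
associate(memory, [0.1, 0.8, 0.2], "red-percept")  # a qualia-like state
print(recall(memory, [0.85, 0.15, 0.05]))  # -> "anger"
```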
And I also agree with you on the seeing-red question. It is pretty much by definition (of epiphenomenalism) a waste of time thinking about it, because under that view it has no causal effect on the material, so no causal link can ever be proved between neural and phenomenal representation. So I just conclude they are two sides of the same coin and treat them as the same thing... talking and thinking about neural and phenomenal representations truly interchangeably - all the time in daily life - with each informing and predicting the other. That's where my confidence in this point of view lies, but nonetheless, the existence of phenomenal representations is still a curiosity. Scientifically pointless to consider, maybe, but philosophically and existentially, not so easy to truly let go of... at least not all the time.