(August 26, 2017 at 9:13 am)Khemikal Wrote: If something doesn't kill us and confers advantages it's likely to persist and spread. It's not really that things come about for a reason - that would be teleology. They persist for a reason, or do not persist for a reason - that's evolution.
I never said that things come about for a reason.
(August 26, 2017 at 9:13 am)Khemikal Wrote:(August 26, 2017 at 4:39 am)Mathilda Wrote: It isn't actually how evolution works. A plant has no ability to grow a brain, liver or kidney even by accident. There is just no physical mechanism in place by which those plant genes could be expressed as such a complex organ. It inhabits the completely wrong part of evolutionary space for that. That's not to say that a plant could not evolve them given sufficient time if the environmental pressures were there.The evolution of complex organs in plants would be deleterious, not advantageous.
Unless you have completely redefined what is meant by a brain, liver and kidney that is.
Ah right, so now you've changed what you're talking about and are referring to the evolution of complex organs in plants. That's not what I was arguing against. To remind you, you said this:
(August 25, 2017 at 10:59 am)Khemikal Wrote: If you consider how each rep is embodied (a subject I know you enjoy) the differences in how they achieve the same advantage may be illuminated. Plants, for example..cant run...and so, organs are costly. A plant with a brain like ours is making one hell of a biological gamble.
By referring to it as a biological gamble you made it sound as though you were referring to the possibility of a plant growing a whole brain when previous plants were unable to. If that is what you meant, you are wrong because, as I pointed out, a plant cannot suddenly grow a brain regardless of whether doing so would be deleterious. If that is not what you meant, then it was wrong to call it a biological gamble: to reach the point where it was capable of growing a brain at all, there would have to have been an evolutionary pressure to develop the mechanisms by which it could grow one, in which case it wouldn't be much of a gamble.
(August 26, 2017 at 9:13 am)Khemikal Wrote: Some of them provide abilities that fit the basic metrics of cognizance. They use these adaptations to cognizance to deal with the same problems that we use our brains to deal with. We posit that our consciousness confers advantages x,y,z...but a plant uses those other things, it's non-conscious analogs, to secure the same x, y, and z's. This is what I've been referring to when I propose that the advantages of what we take to be consciousness are not overwhelmingly or uniformly different from the advantage of things we take to be non conscious, that the reason for the persistence of consciousness as cognizance and "other x" as cognizance are widely the same. No specific and absolute advantage is presented by one, over the other. Just two ways to skin the same cat.
Take what I said earlier about emotion narrowing the range of actions likely to occur and cognition widening that range. This happens over the course of an agent's lifetime. Yet you can make the same argument comparing cognition and instinct. Instinct is learnt over evolutionary time; cognition allows a single agent to learn faster. Cognition provides benefits to the agent in exploiting its immediate environment, whereas evolutionary strategies only benefit the offspring in exploiting their environmental niche.
You can also make the same comparison between general intelligence and evolution. An intelligent species of predator or prey hunts or escapes more often and more effectively than a species that improves only through evolution, because evolution is a very slow process and intelligence is many orders of magnitude faster. Seen in terms of self-organisation, there is a thermodynamic pressure on both the development of intelligence and on evolution to maximise entropy, and intelligence does this faster, more effectively and more efficiently.
So you argue that any advantage consciousness can bring an agent can also be achieved by other means, and I don't necessarily deny that. But I would argue that consciousness brings about those advantages faster. In terms of self-organisation, intelligence and consciousness allow the thermodynamic gradient to be minimised to a greater degree than can be achieved through the evolutionary process alone.
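The timescale argument above can be illustrated with a toy simulation. This is purely my own illustrative sketch, not anything from the discussion: all names and parameter values are hypothetical. Both strategies hill-climb toward an environmentally set optimal trait value, but "evolution" gets one selection event per generation while "learning" gets feedback at every timestep of the same wall-clock span, so it closes the gap far faster.

```python
# Toy sketch (hypothetical parameters): within-lifetime learning vs.
# between-generation selection tracking the same environmental optimum.
import random

random.seed(0)

TARGET = 0.8          # optimal trait value set by the environment
LIFETIME = 50         # timesteps per generation
GENERATIONS = 20      # total generations simulated

def fitness(trait):
    """Higher is better; peaks when trait == TARGET."""
    return -abs(trait - TARGET)

def evolve():
    """Trait changes only between generations, via selection on mutants."""
    trait = 0.0
    for _ in range(GENERATIONS):
        mutant = trait + random.gauss(0, 0.05)
        if fitness(mutant) > fitness(trait):   # selection keeps the fitter variant
            trait = mutant
    return trait

def learn():
    """Trait adjusts every timestep within the same wall-clock span."""
    trait = 0.0
    for _ in range(GENERATIONS * LIFETIME):
        candidate = trait + random.gauss(0, 0.05)
        if fitness(candidate) > fitness(trait):  # feedback on every step
            trait = candidate
    return trait

evolved, learned = evolve(), learn()
print(f"evolved trait error: {abs(evolved - TARGET):.3f}")
print(f"learned trait error: {abs(learned - TARGET):.3f}")
```

With the same update rule and the same elapsed time, the learner simply gets many more feedback events than selection does, which is the sense in which cognition is "orders of magnitude faster" than evolution at dissipating the same gradient.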
(August 26, 2017 at 9:13 am)Khemikal Wrote:(August 26, 2017 at 4:39 am)Mathilda Wrote: Is this a useful concept? Is there any evidence that we need to differentiate between 'experience in' and 'experience of' ? Is there an objective way of determining whether an organism is 'in the sensation' rather than having an 'experience of it'? Can a human with brain damage or a neurodegenerative disease have experience of pain when they would normally have experience being in it? If not then it sounds like a pointless philosophical concept to me.Is it a useful concept, sure, in that it gives specificity.
Specificity of what? Show me how it is a useful concept.
(August 26, 2017 at 9:13 am)Khemikal Wrote:(August 26, 2017 at 4:39 am)Mathilda Wrote: I'd argue then that sea rocket is developing the first rudimentary levels of consciousness. Any organism that needs a sense of self and a sense of others, whether it is because it is predator, prey or part of a pack or colony that has to co-ordinate its actions, will benefit from consciousness.Sea rocket isn't alone in that - it's only alone in it's merciless kin selection. -All- plants have absurdly well developed sensory apparatus, a rich sensual life, all plants benefit from a way of handling and using the information that their sensory apparatus creates - but none have brains and perhaps none have consciousness of any sort. There's really nothing rudimentary about them, at all.
Why assume that the ability to process information needs to be centralised in a brain? Two thirds of the neurons of an octopus are found in its tentacles; only one third is found in its brain. I personally grow carnivorous plants, so I am well aware that some plants can sense and move very fast, yet also avoid being fooled by extraneous sensory stimuli.
(August 26, 2017 at 9:13 am)Khemikal Wrote: An unrooted parasitic germinate, with no central nervous system and nary a single complex organ, that's..on average, 5cm long...is both mobile and capable of performing action selection. The question becomes, does consciousness actually help it, or would it, in any way, or is consciousness an elaborate story we tell ourselves about how parasitic vines pick solanum out of a crowd and inch their way towards it
If the unrooted parasitic vine were able to act more effectively in co-ordination with fellow vines, or the Solanum were actively trying to avoid it by deploying a trap that relied on the vine's most likely response, then yes, I could see an advantage in the vine evolving consciousness. But the interaction is not that complex, so there is no pressure for consciousness to evolve. Using plants as an example is a false equivalence.
(August 26, 2017 at 9:13 am)Khemikal Wrote:(August 26, 2017 at 4:39 am)Mathilda Wrote: So getting back to the original point I was making:
2) Why should this functionality be provided at the quantum level rather than at the level of a network of neurons?
My answer to your question isn't that consciousness should be provided at any specific level, simply that it isn't at some, and we have some indications as to why. Similarly, there's no reason that it should be provided at the level of neurons (or whatever the fuck plants are doing at any given moment)....but, in both cases, it either is or possibly is or something like what we call consciousness is present in both (huge caveat for misapprehensions we have about it). What I wonder, is what it is that both plants, and brains are doing. Are they achieving the same end in a meaningfully different way, or are they achieving the same end in the same way with meaningfully different architectures. What is the shared unit or method of cognizance, if there is one...and if there isn't...just how many ways -are- there to skin that cat?
That wasn't really what I meant by my question, so I shall rephrase it; I can see now how it was ambiguous. Either way, we probably agree that systems at the quantum level are not conscious. So, rephrasing the question:
What reason is there to believe that quantum mechanics are required to explain consciousness?
I see none at all.