(October 23, 2013 at 3:00 am)FallentoReason Wrote: There's two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not meaning etc.
Something seems to have gotten lost here. We are talking about meaning.
You made the statement "A conscious entity is required to assign meaning".
My question was: at what level must its consciousness be for it to assign meaning?
Your reply was "It needs to show emotions/feelings/instincts." - which indicates a sentience level of consciousness.
To which I asked "why that specific level?"
Now you are saying that sentience is required for the entity to be conscious, yet that sentience is irrelevant to assigning meaning?
Secondly, why wouldn't you regard plants as conscious? A sunflower seems to be conscious of the sun's position. A touch-me-not seems to be conscious of when someone touches it. A Venus flytrap is conscious of when a fly has entered it. What's the difference between this and what you call conscious behavior?
(October 23, 2013 at 3:00 am)FallentoReason Wrote: Is a soccer ball concious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?
You are making the same mistake again.
Assignment of meaning can be reduced to data processing - that does not mean all data processing results in assignment of meaning.
Similarly, data processing can be reduced to causal relations - that does not mean all causal relations result in data processing.
Which is why the soccer ball is not conscious of your foot.
(October 23, 2013 at 3:00 am)FallentoReason Wrote: Then clearly the amount of programs is trivial, so let's go back to just examining one program.
Why would you assume causal relations within the algorithm of one program equates to the program being self-aware? What is it about electrons moving through copper that tells you these electrical states that make up the algorithm are "self-aware"?
And on another note... I'm curious... would you feel sorry for a computer if you chopped it in half with a chainsaw? Why? Why not?
I don't assume that causal relations within an algorithm equal self-awareness. Self-awareness is a specific form of data processing in which the processes within the entity serve as input for yet other functions within it. If such a mechanism exists, then I would regard the entity as self-aware. As for feeling sorry for the computer - no, I wouldn't. But then I don't feel sorry for the chicken that is chopped in half for my dinner - so don't read anything into it.
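As a rough sketch of what I mean by "processes serving as input for other functions within the entity" - all the names here are illustrative, not any real system:

```python
# A minimal sketch of a feedback mechanism: the entity's own
# processing history is itself the input to a second function.
# Illustrative only - not a claim that this toy is conscious.

class SelfMonitoringSystem:
    def __init__(self):
        self.history = []  # record of the system's own processing

    def process(self, signal):
        """First-order data processing: react to external input."""
        result = signal * 2
        self.history.append(result)
        return result

    def introspect(self):
        """Second-order processing: the system's own prior processes
        are the input here - the feedback loop in question."""
        if not self.history:
            return "no internal activity observed"
        return f"I have produced {len(self.history)} outputs, last = {self.history[-1]}"

system = SelfMonitoringSystem()
system.process(3)
system.process(5)
print(system.introspect())  # the system reports on its own processes
```

The point is only structural: `introspect` takes no external data at all - its input is the trace left by `process`.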
(October 23, 2013 at 3:00 am)FallentoReason Wrote: Intermediate outputs are just as trivial as the boundaries you assigned between a potential Mega-Program, since said outputs are dependent on where these trivial boundaries are.
Except those boundaries are defined by the scope of each piece of software - so, not trivial at all.
(October 23, 2013 at 3:00 am)FallentoReason Wrote: How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents the e.g. bending moments in the structure? We're still at square one: physical things somehow being *about* something else.
The same way conscious entities assign meaning - which you seem to accept.
In us, abstract categories are stored in our neural pathways; sensory data (neural impulses) is processed according to those categories within the neural circuitry that realizes a brain function, and the data gets reclassified - assigned meaning - as a result.
In a computer, abstract categories are stored as specific electrical states in hardware; computational data (electrical signals) is processed according to those categories within the electrical circuitry that realizes a program, and the data gets reclassified - assigned meaning - as a result.
The biggest difference between the two is that while humans are capable of developing their own abstract categories, computers have to be pre-programmed with them - for now.
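To make the parallel concrete, here is a toy sketch of "assigning meaning" as reclassification against stored categories. The categories and labels are made up for illustration, and, per the point above, they are pre-programmed - the program does not invent them itself:

```python
# Abstract categories, stored as program state (the analogue of
# categories stored in neural pathways or electrical states).
CATEGORIES = {
    "freezing": range(-50, 0),
    "cold":     range(0, 15),
    "warm":     range(15, 30),
    "hot":      range(30, 50),
}

def assign_meaning(reading):
    """Raw data (a bare number) comes in; a category label comes out.
    The reclassification step is what 'assigning meaning' amounts to
    in this sketch."""
    for label, span in CATEGORIES.items():
        if reading in span:
            return label
    return "unclassified"

print(assign_meaning(22))  # the raw signal 22 is reclassified as "warm"
```

Nothing in the causal story changes between a reading that matches a category and one that doesn't - which is also why not every causal relation amounts to data processing, and not every piece of data processing amounts to assigning meaning.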