(October 23, 2013 at 2:34 am)genkaus Wrote: (October 23, 2013 at 2:04 am)FallentoReason Wrote: It's a tricky thing to define for sure. I'd say the entity needs to show emotions/feelings/instincts.
Given that an entity can be a conscious entity without having sentience - why do you consider the existence of emotions/feelings/instincts necessary for assigning meaning?
There are two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious; e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not into meaning etc.
Quote:(October 23, 2013 at 2:04 am)FallentoReason Wrote: "The only entities conscious of them were the next program".
I love the word choice here. It's clear as day that you're begging the question.
Am I? Or am I using it in such a manner that it doesn't beg the question?
The word "conscious" is normally used to describe either biological or obviously self-aware entities - but the limits of the word are not defined. The simplest explanation of consciousness - without any dualistic baggage - would be "X is conscious of Y when some information from Y is received and processed by X". The behavior of the programs fits this category.
Is a soccer ball conscious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?
Quote:(October 23, 2013 at 2:04 am)FallentoReason Wrote: Anyways, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithm of all three? Your non-issue would dissolve and we would be at square one.
Actually, in that case the issue would be compounded. If we have a mega-program whose sections are exchanging information with each other, then I could make an argument that such a program has a degree of self-awareness.
Then clearly the number of programs is irrelevant, so let's go back to just examining one program.
Why would you assume that causal relations within the algorithm of one program equate to the program being self-aware? What is it about electrons moving through copper that tells you the electrical states that make up the algorithm are "self-aware"?
And on another note... I'm curious... would you feel sorry for a computer if you chopped it in half with a chainsaw? Why? Why not?
Quote:(October 23, 2013 at 2:04 am)FallentoReason Wrote: Bottom line here is that it doesn't matter how many programs or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.
But they do have to give it meaning - that was my point. The user is not aware of the intermediate outputs, but there is a set of abstract categories which can assign meaning to them, and that is done by the programs.
Intermediate outputs are just as arbitrary as the boundaries you drew within a potential Mega-Program, since those outputs only exist relative to where those arbitrary boundaries sit (see the sketch at the end of this post).
How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents, e.g., the bending moments in a structure? We're still at square one: physical things somehow being *about* something else.
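To make the boundary point concrete, here's a minimal sketch in Python (hypothetical functions I've made up purely for illustration, not anything from an actual system we've discussed): three separate "programs" chained together versus one Mega Program that folds the same steps into a single algorithm. The input-output behaviour is identical, so the "intermediate outputs" only exist relative to where we decide to draw the boundaries.

Code:
# Three hypothetical "programs", each just a causal transformation
def program_a(x):
    return x + 1

def program_b(x):
    return x * 2

def program_c(x):
    return x - 3

# The same causal chain folded into one "Mega Program" with no internal boundaries
def mega_program(x):
    return ((x + 1) * 2) - 3

# Identical behaviour: the "intermediate outputs" of A and B are only there
# because we chose to name the boundaries between the steps
assert program_c(program_b(program_a(5))) == mega_program(5)

Nothing about the physics changes between the two versions; only our bookkeeping does.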
"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle