(February 15, 2016 at 8:55 am)Rhythm Wrote: Hehehe, yeah, it's a tangent, let's bring it back round and in, shall we? Demystifying those circuits, demystifies our experience. It may not explain it, but understanding how those systems work can explain how "stuff" -can- produce the effects we attribute to mind, even if we do it Some Other Way, even if, ultimately, it's other stuff or different principles in action, in our case.
We might not be able to say "this is how we see red" - but we can describe how a computer sees red... and exhaustively so, all the way down to the quirks of a program counter's mechanical implementation.
Consider, for example, the way your circuits simply accept an input as true. That's required, mechanically, to do logic - a quirk just like the placement of a mux in-circuit. It might seem puzzling that a logic machine can produce illogical statements and conclusions... but they clearly can... if and when that true input is standing in as the variable for whether or not a truth statement X -exists-, is stored in memory. The system takes it to be true, it does logic, and all is for naught at the end because it produces a gibberish statement such as "All birds are made of iron" -is true-.
We have a similar habit, in that we often assess the truth value of a claim by what we have stored in memory, regardless of the truth value of the statement in relation to some exterior or even objective standard. We might have seen a 6 inch crappie, and, when asked "is this a big crappie" we're answering, essentially, whether or not that crappie is bigger than the crappie of memory - our answer may not be representative of the size of crappie as a species, but it -will- be representative of those statements regarding crappie and size taken as true by us, based upon those inputs we are confined by in assessing anything.
A comparator can answer that question just as easily as we can, and it will succeed or fail in answering those questions by varying standards for precisely the same reasons and in precisely the same scenarios. In both cases the following occurs: if the crappie is larger than the crappie of memory, the conclusion is true, yes... that's a big crappie. If it's smaller, the reverse. If the crappie of memory is particularly large, then our statement "no, that is not a big crappie" is true only with reference to the system... outside of that, the crappie in question might be very large indeed. If our crappie of memory is very small... again the reverse. Why?
Interesting example
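Just to make that comparator point concrete: here's a toy sketch of the "crappie of memory" idea. All the names and numbers are mine, purely for illustration - the verdict the system gives depends only on its stored reference, never on any objective standard.

```python
# Toy "crappie comparator": the answer is true only relative to the
# system's stored memory, not to crappie as a species.

def is_big(observed_cm: float, remembered_cm: float) -> bool:
    """True only with reference to what the system has in memory."""
    return observed_cm > remembered_cm

# Same fish, different memories, opposite 'truths':
print(is_big(15.0, remembered_cm=10.0))  # True  - memory holds a small crappie
print(is_big(15.0, remembered_cm=40.0))  # False - memory holds a big one
```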

To put what I'm saying into context, any time you identify something in life you follow the same process. In order to identify something you have to already have a representation of it in your mind. And in this sense a representation is just a binding neuron... a means of associating a group of features into one unit... in other words, a means of categorising something by its features. You notice one feature of something. If that's enough to trigger the binding neuron... to get it to a 'maybe' level of activation... then bidirectional feedback starts biasing the other associated features, making them easier to activate. Activate the next feature and the binding neuron's activation goes higher, feeds back more, biases more. And as this feedback comes down, it not only biases some neurons but actually takes others above threshold and activates them. These neurons that come on during the settling process are what I believe are assumptions... things you take as true without necessarily being aware of them or where they came from.
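The settling process I'm describing can be sketched in a few lines. To be clear, this is my own toy version with made-up thresholds and gains, not an Emergent model: a binding unit pools its feature detectors, and once it clears a weak 'maybe' threshold it feeds activation back down, biasing - and eventually fully activating - the features it hasn't seen directly.

```python
import numpy as np

weights = np.ones(4)                       # binding unit <-> four features
features = np.array([0.9, 0.1, 0.1, 0.1])  # only one feature noticed so far
MAYBE, ACTIVE, GAIN = 0.2, 0.5, 0.3        # illustrative values

binding = 0.0
for step in range(50):
    binding = weights @ features / len(features)   # bottom-up pooling
    if binding > MAYBE:
        # top-down feedback biases every associated feature upward
        features = np.clip(
            features + GAIN * binding * weights / len(features), 0.0, 1.0)

print(binding > ACTIVE)           # True: the unit settles into a confident 'yes'
print((features > ACTIVE).all())  # True: unseen features get pulled over threshold
```

The features that come on purely through the top-down feedback loop are the 'assumptions' in my terms.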

If on the other hand there is no representation of something in your mind... ie it's a novel object/place etc... you've got to wait until a neuron takes up the job of associating its features before you can start having expectations about it (where 'expectation' is represented by the bidirectional biasing feedback). That will happen, over time, given enough repeated presentations of the novel environment/object/place etc. Each pass of learning extracts, as it were, the stable features of that environment/object/idea and associates them. The learning process is complicated, but in case it means anything to you, or is of any interest, it's Hebbian model learning, and the way Emergent models it is with an algorithm called CPCA - Conditional Principal Components Analysis.
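In case it helps, here's a hedged sketch of the CPCA rule as I understand it: when the receiving (binding) unit is active, each weight moves a little toward the current input, so over many presentations a weight drifts toward the probability that its feature is present when the unit fires - which is exactly the "extracting the stable features" bit. The object, feature probabilities and learning rate below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.full(3, 0.5)          # weights start uninformative
LRATE = 0.01

for _ in range(5000):
    # a 'novel object': two stable features, one appearing only 20% of the time
    x = np.array([1.0, 1.0, 1.0 if rng.random() < 0.2 else 0.0])
    y = 1.0                  # assume the binding unit fires for this object
    w += LRATE * y * (x - w)  # CPCA: dw = lrate * y * (x - w)

print(w)  # stable features end near 1.0; the noisy one near its 0.2 frequency
```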
The point is that you identify something by the presence of its features, and the level of 'truth' is a measure of how activated the detector of those features is. The same process works at any level of abstraction... from identifying objects and lower, right up to abstract ideas and beyond. And IMO, high activation of such contexts - indicating the presence of activated features - corresponds to the feeling of truth/belief/real. But, and this is a big but, I'm not talking about individual binding neurons so much as entire contexts... ie related sets of associations... self-sustained feedback loops of bidirectional activation.

This is how I see it: I think there is a limit to how many contexts can be active in your mind at once (I call it your 'mindscape') and I think it's the magic number 7 plus or minus 2, as I learned about in psychology. That is to say, how many completely unrelated things you can remember (if they were related they'd be part of the same context). I don't know if you had 'The Generation Game' over there with Bruce Forsyth, but in it was the prize conveyor belt where various prizes scrolled by - TVs, picnic hampers, champagne etc - and any the contestant could remember they got to keep. But they never could remember all of them, because they're not contextually related. Anyway, I believe that context limitation is enforced by the level of inhibition in the binding layers, which is to say that only a certain number of binding neurons can be on in those layers at any given time.
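That inhibition idea can be caricatured as k-winners-take-all: however many contexts compete, the binding layer only lets the k strongest stay active. (Emergent's actual kWTA computes an inhibitory current; this just thresholds, and the prize activations are invented.)

```python
import numpy as np

def kwta(activations, k=7):
    """Keep the k most active units; inhibition silences the rest."""
    act = np.asarray(activations, dtype=float)
    if len(act) <= k:
        return act
    cutoff = np.sort(act)[-k]          # activation of the k-th winner
    return np.where(act >= cutoff, act, 0.0)

# Ten unrelated conveyor-belt prizes, ten separate contexts:
prizes = np.array([0.9, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5, 0.4, 0.3])
remembered = kwta(prizes, k=7)
print(np.count_nonzero(remembered))   # 7: the rest fall out of the mindscape
```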
So I think the feeling of belief is a measure of the activation of a whole context that's in mental focus. The biggest context that's active in the mind is the present... all the sensory data of the present. So from this state that you always return to, what does it take to activate another mental context to the extent that you believe it is real? That is to say, how do you literally make-believe? Real life is the stronger context... you are bombarded with 'truth' from your senses... and while active it will contradict anything you try to imagine as being real. So if I try to imagine eating an apple so vividly that I believe I am eating an apple, my reality context stops that from happening by essentially saying 'wtf are you talking about? You're not eating an apple, and here's the sensory data from your taste buds to prove it.'
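Here's my own toy framing of why the reality context wins, not taken from any model: both contexts inhibit each other, but only reality gets continuous external drive from the senses, so even a vividly imagined apple gets squashed.

```python
def compete(reality, imagined, sensory_drive=0.6, inhibition=0.5, steps=50):
    """Two mutually inhibiting contexts; only 'reality' has external input."""
    for _ in range(steps):
        r = min(1.0, max(0.0, reality + sensory_drive - inhibition * imagined))
        i = min(1.0, max(0.0, imagined - inhibition * reality))  # no drive
        reality, imagined = r, i
    return reality, imagined

# Start with a strongly imagined apple (0.9) against a moderate reality (0.5):
reality, imagined = compete(reality=0.5, imagined=0.9)
print(reality > imagined)   # True: the imagined apple loses to the senses
```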


So there's me just gone off on probably a tangent as usual



