RE: Is the statement "Claims demand evidence" always true?
January 14, 2017 at 11:32 pm
(This post was last modified: January 14, 2017 at 11:46 pm by emjay.)
I'll get on that reading properly tomorrow, but I'll just leave this thought dump here in the meantime and see if it helps elucidate what I might mean by truth-in-context. Apologies in advance, because I have to go neural again. All the following is in my opinion:
A neural network allows a system to create a representational model of the stable features of the environment it finds itself in, constrained by and extracted from the sensory data it has access to. So what's 'in here' (the representational model... which we perceive and can still perceive in the absence of sensory data... ie when dreaming, remembering, or imagining) is entirely dependent on what's 'out there' (the environment, as accessed by the senses) for its very existence. So the fact that there's anything in here means there's something stable out there, such as gravity or physical objects... that can be reliably detected by our senses in combination.
So how does 'truth' fit into this? The model is the 'truth' about our environment in terms of what is stable (and limited by our sensory access to it)... so if it were the Matrix and it had this same setup it (the neural network) would statistically extract the same stable features of the world and our experience would be no different. But if it was a different matrix, one with very different stable things, if that were possible... and putting aside the fact that we have specialised sensory equipment evolutionarily 'designed' to detect what we need to reliably detect the stable features of this environment; just pretending we were a neural blank slate for the sake of argument... then our model and perception would be very different. But make no mistake, a neural network is a statistical modelling machine so it can only learn if there is something stable presented to it. So without something stable in an environment there could be no model, therefore if you accept neural networks as the basis for the mind, it implies an 'out there' with stable features.
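The claim that a statistical modelling machine can only learn from something stable can be illustrated with a toy sketch. This is just my own illustration (the function name, learning rate, and numbers are all made up for the example, not taken from any real model): a single "neuron" that tracks a running average of its input will settle on a value when the input has a stable feature buried in noise, but has nothing to settle on when the input is pure noise.

```python
import random

def learn(samples, lr=0.1):
    """Toy 'neuron': a running weighted average that models its input."""
    w = 0.0
    for x in samples:
        w += lr * (x - w)  # nudge the model toward each new sample
    return w

random.seed(0)
# A stable feature (the value 5.0) seen through sensory noise:
stable = [5.0 + random.gauss(0, 0.5) for _ in range(1000)]
# No stable feature at all, just noise:
unstable = [random.uniform(-100, 100) for _ in range(1000)]

print(round(learn(stable), 1))  # settles close to the stable value, 5.0
print(learn(unstable))          # never settles; the final value is arbitrary
```

The point of the sketch is just the asymmetry: run it repeatedly with different seeds and the first result stays near 5.0 while the second wanders, which is the sense in which a model 'in here' implies something stable 'out there'.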
So from the neural perspective there are two kinds of truth: one is what things are - which stable features of the environment are being represented - and the other is whether they are currently being detected, indicated by activation. That's because neurons learn and detect at the same time: the synaptic changes we call learning only happen when a post-synaptic neuron is active, and only in relation to a pre-synaptic neuron that is also active and providing input to it. So as an aside, in my opinion sleep and dreaming are all about consolidating learning... flooding the network with activation allows synapses to learn more rapidly... but in so doing it also activates the neurons, which, given my view of the equivalence of neural and phenomenal representations, means phenomenally experiencing those activations in the form of dreams. Makes perfect sense.
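That "only when both pre- and post-synaptic neurons are active" condition is essentially the classic Hebbian learning rule. A minimal sketch of it, with made-up names and a made-up learning rate (this is an illustration of the rule, not a claim about real synaptic biophysics):

```python
def hebbian_update(w, pre, post, lr=0.05):
    """Hebbian rule: the weight changes only when both the pre-synaptic
    and post-synaptic activations are non-zero."""
    return w + lr * pre * post

w = 0.1
w = hebbian_update(w, pre=1.0, post=1.0)  # both active -> weight grows
w = hebbian_update(w, pre=1.0, post=0.0)  # post-synaptic silent -> no change
w = hebbian_update(w, pre=0.0, post=1.0)  # pre-synaptic silent -> no change
print(round(w, 2))  # 0.15 - only the first update changed anything
```

Notice that flooding such a network with activation (driving pre and post high everywhere) would indeed make all the eligible weights change faster, which is the consolidation-during-sleep idea above.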
This picture of truth is complicated, though, by the differing 'quality' of representations. The physical world is detected and modelled by specialised circuits, such as the visual cortex... which can be considered hard-coded (by evolution) into its very structure (in the sense that it uses specialised neurons with specialised sensitivities) to detect and represent colours, lines, shapes etc. So that part we can consider pretty reliable. But then you have the association cortex... hugely enlarged in humans and with a structure designed for multimodal association... its size allowing for an exponential increase in 'processing power' because of an almost limitless capacity for abstraction (compared to other animals). The same type of learning occurs, but the model cannot be considered as reliable because it is far more subjective and abstract (thoughts, ideas etc)... in other words, whereas we can pretty much assume that everyone has the same representations of the physical world, we can't make the same assumption about their thoughts and memories... they are plastic and different in everyone. But they are still learnt and activated (neurally) in the same way. So that's why I have difficulty talking about truth, because from this perspective it only means either what is represented (which can be wrong... less so in the hard-wired areas and more so in the association areas) or whether it's currently detected/activated (and to what degree... and in thought you can deliberately activate representations). You only need to play a game of Mafia (please do... sign-ups are open /plug ) to realise that you can experience total certainty and still be completely wrong. So that feeling of certainty... of knowing... is no guide to the actual truth; all it means is activation.
I think that'll do for tonight. Hope this helps... rather than hinders in seeing where I'm coming from.
Actually this is probably not helpful, but it was fun to write so I'll leave it. Nighty night