RE: Seeing red
February 3, 2016 at 8:58 am
(February 3, 2016 at 8:55 am)Rhythm Wrote: Hey, for a darker twist. How much is hard observation of the brain/mind worth to us? How deep down the ethical rabbit hole would we go?
Let me introduce you to HSD (Hallervorden-Spatz Disease), now called NBIA-1 (neurodegeneration with brain iron accumulation type 1). A neurological condition described in 1922 by Hallervorden and Spatz. Brilliant neurologists. Go read up on it if you want to be terrified. Relentlessly progressive dementia with an equally shitty full-body experience attached.
They would later lend their expertise (and hone it) in Nazi Germany, assisting with the eugenics program and performing human experimentation. A Faustian bargain?
I think this question alone is sufficient to justify the continued existence and practice of philosophy.
RE: Seeing red
February 3, 2016 at 9:00 am
(February 3, 2016 at 8:08 am)Rhythm Wrote: 1. It's a universal gate. Pile them together and you can realize -any gate- in their aggregate...and thus any function of any comp system. This is an interesting run in for me...but ask yourself what kind of architecture would be well suited to the sorts of growth by repetition inherent to life? Redundant ones, ofc. A simple block that can be arranged in every way the system might require. Nothing fancy, no top down design work to do. Just repeat ad infinitum (or until you run out of headspace). I think neurons are the biological equivalent of universal gates architecturally - but in terms of processing power they're probably more like full alus+......
Quote:From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates.
https://en.wikipedia.org/wiki/Functional_completeness
2. Some gates can accept as many inputs as you like hypothetically (some gates are defined by the numbers of inputs and outputs, though), and as many as you can cram in the space, practically (the same is true for outputs). The number of inputs, though, can affect the robustness of the system either way, depending on what it's being tasked to do. For example... if you want C to be "red apple", and you have A (red) and B (apple), and you add a third input, X (water), your gate will fail to yield C "red apple" even if it receives red and apple... because it did not receive water as well. The number of outputs doesn't have that effect, though, and so those are essentially "free" in the context of the red apple problem as it relates to a three-input AND gate. You'd want as many outputs as possible to distribute the state to as many parts of the system as might be useful. Bussing, basically... bussing from hell. Full interconnectivity would be the ideal towards which you'd strive... but ultimately, fail at our scale of manufacture. The scale of manufacture that goes into neurons, however, is much finer.
Cool... I understand. This will be fun: trying to model a neuron this way and to see what an equivalent basic repeatable block in computing terms would look like... so that machines could 'evolve'. But ultimately the kind of connectivity in the brain is something I - and you too, I think - don't believe would ever be feasible in practice... not just bussing from hell but bussing with the Devil's blessing. Because in the brain there's a certain amount of plasticity: axons and dendrites can grow, essentially changing connectivity during the life of the system and adding inputs and outputs to a 'gate', and synaptic learning has a similar variable effect on the inputs of a gate. But still, it will be fun to see if at least a neuron could be modelled, even roughly.
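(A quick illustrative sketch in Python of the functional-completeness point quoted above - none of this is from the posts themselves, and the names are just for illustration: NOT, AND and OR built from nothing but a two-input NAND, plus the three-input AND from the red/apple/water example.)

Code:
def nand(a, b):
    # universal gate: output is 0 only when both inputs are 1
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def and3(red, apple, water):
    # three-input AND: only yields 1 when all three inputs are present,
    # which is why adding the 'water' input makes plain 'red apple' fail
    return and_(and_(red, apple), water)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "AND:", and_(a, b), "OR:", or_(a, b))

print("red=1, apple=1, water=0 ->", and3(1, 1, 0))  # 0: the gate does not fire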
RE: Seeing red
February 3, 2016 at 9:06 am
(This post was last modified: February 3, 2016 at 9:09 am by The Grand Nudger.)
Here's something you might like Emjay. MUX. You'll be needing one (or a few billion, lol). The image alone will explain why.
Quote:In electronics, a multiplexer (or mux) is a device that selects one of several analog or digital input signals and forwards the selected input into a single line.
https://en.wikipedia.org/wiki/Multiplexer
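(An illustrative sketch in Python, not from the post, of what the quoted definition describes: a 4-to-1 multiplexer where a select value chooses which of four input lines reaches the single output.)

Code:
def mux4(inputs, select):
    # 4-to-1 multiplexer: 'select' (0-3) picks which input line
    # is forwarded to the single output line
    assert len(inputs) == 4 and 0 <= select <= 3
    return inputs[select]

signals = [0, 1, 1, 0]          # four input lines
print(mux4(signals, select=2))  # -> 1: line 2 routed to the output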
RE: Seeing red
February 3, 2016 at 9:14 am
(February 3, 2016 at 8:57 am)bennyboy Wrote: Fair enough. My point is really about a kind of butterfly effect. In a sufficiently complex system, the complexities of classical mechanics will lead to unpredictability. So when we start talking about states and binary decisions, and leave out the chaos (especially as a function of time), I'm not sure whether we are still including the elements essential to the system.
That might be true in a computer system... if one component fails it might bring down the whole thing... but in the brain, redundancy is 'built in' so where I talk about single neurons they're actually populations of neurons, having the effect of averaging out 'noise'. So there's never going to be a place where a single neuron's failure will catastrophically affect the whole network, but rather the signal strength will just weaken with progressive damage to a population of neurons.
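(A rough sketch in Python, with made-up numbers, of the population-redundancy point above: a 'signal' carried by many noisy neurons stays readable as neurons are removed - it just gets noisier - rather than failing all at once.)

Code:
import random

def population_estimate(n_neurons, true_rate=40.0, noise=10.0):
    # each neuron reports the 'true' signal plus its own noise;
    # averaging over the population smooths the noise out
    readings = [random.gauss(true_rate, noise) for _ in range(n_neurons)]
    return sum(readings) / len(readings)

random.seed(0)
for n in (1000, 500, 100, 10, 1):
    print(n, "neurons ->", round(population_estimate(n), 1))
# the estimate stays near 40 but wobbles more as the population shrinks:
# the signal weakens gradually instead of breaking outright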
RE: Seeing red
February 3, 2016 at 9:18 am
(This post was last modified: February 3, 2016 at 9:21 am by The Grand Nudger.)
There's chaos aplenty in those states and systems. Minor defects, full-on malfunctions, and even the microclimate in the few millimeters surrounding the circuits all cause a dazzling array of hilarious errors. We design our machines to run over these, for the most part. As you lose portions of the system to physical damage, you lose range of function from those portions. Think of losing a neuron, for example, as losing one of your computer's cores. A neuron isn't just a bit of data; it's got significantly more going on than that.
RE: Seeing red
February 3, 2016 at 9:19 am
(February 3, 2016 at 9:06 am)Rhythm Wrote: Here's something you might like Emjay. MUX. You'll be needing one (or a few billion, lol). The image alone will explain why.
Quote:In electronics, a multiplexer (or mux) is a device that selects one of several analog or digital input signals and forwards the selected input into a single line.
https://en.wikipedia.org/wiki/Multiplexer
Ooh, this is gonna be cool... I'm really looking forward to this
RE: Seeing red
February 3, 2016 at 11:39 am
(This post was last modified: February 3, 2016 at 11:42 am by emjay.)
(February 3, 2016 at 2:23 am)Jörmungandr Wrote: (February 2, 2016 at 9:50 am)Emjay Wrote: This is how I see it; just as neural visual processing happens in layers, I think visual perception also happens in layers. A way to visualise it would be as transparencies laid on top of each other. The bottom one, the input layer - call it L1 (and note these L's here are just to demonstrate a point and bear no relation to the actual structure of the visual cortex) - would just be a photograph of a visual scene. Then above that would be L2, a layer mapping and representing colour information. L1 would be said to 'project' to L2. But as a transparency, this layer placed on top of L1 would look exactly the same... they would be seamlessly integrated perceptually because L2 would be extracting one property from the raw data in L1. Then say you've got L3 mapping lines. L1 would project to L3 but L2 wouldn't, so diagrammed hierarchically L2 and L3 would be on level 2 and L1 would be on level 1. Again, L3 as a transparency placed on top of the other two would be seamlessly integrated. Then on top of this you have the output layer - L4. All of these layers would be interconnected bidirectionally, which allows for both bottom-up and top-down activation and pattern completion.
So if you look at dreaming, the input layer, L1, is essentially turned off because your eyes are closed and you are not receiving visual input. Yet you can still dream vivid visual dreams. That makes sense if layers L2 and L3 are activated from the top down by L4. The perception, having the bottom transparency removed, still captures the general structure of the photograph but loses the fine-grained detail of the raw data. For the sake of this, L4 can be considered the focus layer in that it is a map of the visual field just like L1, except that in receiving projections from L2 and L3 it is used to associate those object features. So activating a neuron in L4 would bidirectionally - and bidirectional connectivity is a prevalent feature of the visual cortex and most of the cerebral cortex - activate the associated neurons in L2 and L3, or bias them for easier activation from L1... that is to say, if the threshold value for firing a neuron is say 50, then bidirectional input reduces that effective threshold, so that say a value of 40 from L1 would push it over threshold. That is neural bias. Anyway, the focus layer would receive input from whatever drives focus in the system... so that would be the feedback loop you talk about between environment and motor output.
Now if you take the question of imagination (and memory), I think this theory offers a good explanation. I think imagination is when there is a mismatch between the layers L1 to L4. That is to say if L2 and L3 are activated to simply extract the features of L1 then perceptually they seamlessly integrate because of the transparency effect I've described. But if a different set of neurons was activated in L2 and L3 - which could well happen not just because of top-down bidirectional input from the focus layer but also from any other areas of the brain that project to any of these layers - say a green pixel where the underlying data represents a red pixel then there would be a vague show-through effect and the greater the 'erroneous' activation of the L2 and L3 neurons, the more their transparencies would interfere with the perception of L1. So imagination starts off vague... just a kind of ghostly outline/sense superimposed on the visual field... but as it grows stronger it becomes more and more vivid. And this also I think could explain what I mentioned earlier in that you lose visual awareness when you get lost in thought; there would come a point when the interference from the erroneous L2 and L3 activations would essentially block L1 from having any say in the activations in L2 and L3, and thus visual perception would now fully reflect the context activated from the top-down... for the duration of this, until you snapped out of it, L1 would essentially be 'talking to the hand'.
And I think this transparency/interference principle would apply equally well to the other sensory modalities and their equivalent transparencies. And the integration of it all into a unified whole would still reflect the same principles, just at a much more complex level of interconnectivity. And whatever's 'in focus' in consciousness at a given time would reflect where the activation is concentrated in these layers, in the constant interplay of top-down, bottom-up, and lateral connectivity and influence. In other words focus to me is a passive thing... it follows where and how the network settles and reflects it.
Thank you for describing your model to me. That does make a lot of sense. I can't help but feel, though, that it is missing a command center where the output of these stages is registered. Maybe I'd see it differently if I had more experience with neural nets, but maybe not.
I think the connectivity of the network would account for everything - ie no need whatsoever for 'software' in the brain, just neural network dynamics playing out according to specific patterns of connectivity and the feedback provided by the living organism interacting with the environment. But if that connectivity includes binding neurons for the different perceptual systems, who knows. That could be what the claustrum is - the consciousness switch we talked about earlier - but I don't know. I think it would be natural for such representations to form, given the right connectivity, and perhaps such connectivity is preserved through DNA to create such structure in future generations, but I wouldn't so much think of it as a command centre as an association centre. So it could be that the different perceptions are coordinated from a specific brain area, but I think only in terms of associations and using the same neural network principles as everywhere else. But still with the same net result: whatever is actively represented in the network is also mirrored in consciousness, not just in terms of what - ie the neural representations - but also the manner and timing of when it is perceived (regardless of what form that perception takes... what form the qualia takes). In other words, ignoring the form of the qualia, the representational content we experience in consciousness appears and disappears, and qualitatively becomes more vivid or more vague, in exactly the same manner and timing as would be expected of the equivalent representations in a neural network activating or deactivating according to the network dynamics. I can't ignore that, so Occam's Razor to me says that qualia mirrors what is actively represented in the network at any given time.
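(A toy sketch in Python of the 'neural bias' idea from the quoted post above - the threshold of 50 and the value of 40 come from it, everything else is assumed: top-down input doesn't have to fire the unit by itself, it just lowers the effective threshold seen by the bottom-up input.)

Code:
THRESHOLD = 50  # firing threshold from the quoted example

def fires(bottom_up, top_down=0):
    # the unit fires when combined bottom-up and top-down drive reaches
    # threshold; top-down bias effectively lowers the threshold for bottom-up input
    return (bottom_up + top_down) >= THRESHOLD

print(fires(bottom_up=40))                # False: 40 alone is sub-threshold
print(fires(bottom_up=40, top_down=10))   # True: top-down bias pushes it over
print(fires(bottom_up=0, top_down=60))    # True: pure top-down drive (the dreaming/imagery case)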
RE: Seeing red
February 3, 2016 at 7:36 pm
(February 3, 2016 at 9:14 am)Emjay Wrote: (February 3, 2016 at 8:57 am)bennyboy Wrote: Fair enough. My point is really about a kind of butterfly effect. In a sufficiently complex system, the complexities of classical mechanics will lead to unpredictability. So when we start talking about states and binary decisions, and leave out the chaos (especially as a function of time), I'm not sure whether we are still including the elements essential to the system.
That might be true in a computer system... if one component fails it might bring down the whole thing... but in the brain, redundancy is 'built in' so where I talk about single neurons they're actually populations of neurons, having the effect of averaging out 'noise'. So there's never going to be a place where a single neuron's failure will catastrophically affect the whole network, but rather the signal strength will just weaken with progressive damage to a population of neurons.
This is the argument in QM or in general with the "butterfly effect." The idea is that in very complex systems, even though things normally average out, there are so many events that SOMETIMES a tiny variation snowballs. It comes down to arguments about determinism, because we can never actually know whether the chaos could have turned out any other way than it has.
Consider whether a brain is really a binary device or whether it functions as an analogue device (I'd argue it has components of both, actually). Then consider the concept of constructive interference in waves, and the fact of rogue waves (aka the "perfect storm"). In binary systems, you'll never get a perfect storm. In GENERAL, the complexity of the ocean leads to relatively uniform waves over the surface, but due to very-small-chance statistical interactions, you sometimes get harmonics that lead to freak incidents.
So even if people disregard the butterfly effect in general, in any system with sufficient complexity, and given that it is not digital, you will sometimes end up with unpredictable results.
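(A quick illustrative simulation in Python - parameters entirely made up - of the rogue-wave point: summing many sinusoids with random phases mostly averages out, but every so often the phases line up and the peak is several times the typical height.)

Code:
import math
import random

random.seed(1)
N_WAVES, SAMPLES, DT = 50, 50_000, 0.01
phases = [random.uniform(0, 2 * math.pi) for _ in range(N_WAVES)]
freqs = [random.uniform(0.9, 1.1) for _ in range(N_WAVES)]

peak, total_sq = 0.0, 0.0
for t in range(SAMPLES):
    # superpose all wave components at this instant
    height = sum(math.sin(f * t * DT + p) for f, p in zip(freqs, phases))
    peak = max(peak, abs(height))
    total_sq += height * height

rms = math.sqrt(total_sq / SAMPLES)
print("typical (RMS) height:", round(rms, 2))
print("largest single peak :", round(peak, 2))
# the rare constructive pile-up dwarfs the typical wave - the 'perfect storm'
# that a purely binary system never produces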
RE: Seeing red
February 3, 2016 at 7:38 pm
(This post was last modified: February 3, 2016 at 8:00 pm by bennyboy.)
(February 3, 2016 at 9:18 am)Rhythm Wrote: There's chaos aplenty in those states and systems. Minor defects to full on malfunctions and even the micro climate in the few millimeters surrounding the circuits all cause a dazzling array of hilarious errors. We design our machines to run over these, for the most part. As you lose portions of the system to physical damage you lose range of function from those portions. Think of losing a neuron, for example, as losing one of your computers cores. A neuron isn't just a bit of data, it's got significantly more going on than that.
Living and teaching in Korea, I've talked to some electronics engineers, and they tell me that chips now have built-in error checking to account for quantum tunneling - or at least that their architecture is arranged in a way to limit the effect, which is real at few-nanometer chip architectures. Believe that shit.
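(Not a claim about what any particular chip actually does - just a minimal Hamming(7,4) sketch in Python of the general idea of built-in error checking: three parity bits let the receiver find and flip back a single corrupted bit.)

Code:
def hamming74_encode(d1, d2, d3, d4):
    # three parity bits protect four data bits (codeword positions 1..7)
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    # recompute the parities; a nonzero syndrome points at the flipped bit
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3          # 0 means no single-bit error
    if pos:
        c[pos - 1] ^= 1                 # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]     # recovered data bits

data = [1, 0, 1, 1]
sent = hamming74_encode(*data)
sent[5] ^= 1                            # one bit flipped 'in transit'
print(hamming74_correct(sent) == data)  # True: the error was found and fixed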
I have a question for both of you. Do you believe that if you reduce brain complexity by pulling neurons 1-by-1, the results will become more stable, or more variable?
RE: Seeing red
February 4, 2016 at 8:00 am
(February 3, 2016 at 7:36 pm)bennyboy Wrote: (February 3, 2016 at 9:14 am)Emjay Wrote: That might be true in a computer system... if one component fails it might bring down the whole thing... but in the brain, redundancy is 'built in' so where I talk about single neurons they're actually populations of neurons, having the effect of averaging out 'noise'. So there's never going to be a place where a single neuron's failure will catastrophically affect the whole network, but rather the signal strength will just weaken with progressive damage to a population of neurons.
This is the argument in QM or in general with the "butterfly effect." The idea is that in very complex systems, even though things normally average out, there are so many events that SOMETIMES a tiny variation snowballs. It comes down to arguments about determinism, because we can never actually know whether the chaos could have turned out any other way than it has.
Consider whether a brain is really a binary device or whether it functions as an analogue device (I'd argue it has components of both, actually). Then consider the concept of constructive interference in waves, and the fact of rogue waves (aka the "perfect storm"). In binary systems, you'll never get a perfect storm. In GENERAL, the complexity of the ocean leads to relatively uniform waves over the surface, but due to very-small-chance statistical interactions, you sometimes get harmonics that lead to freak incidents.
So even if people disregard the butterfly effect in general, in any system with sufficient complexity, and given that it is not digital, you will sometimes end up with unpredictable results.
Yeah, I understand what you mean. Rational AKD was talking about all this stuff as well... QM effects in the brain... microtubules etc. I accept that there could be quantum effects in the brain and, following that discussion, that they could have much more influence than I ever gave them credit for (I assumed before that they had no real effect at the relatively macro scale of molecules and cells), and even perhaps be leveraged by the system in some way - in microtubules etc - adding yet another level of complexity and even perhaps connectivity. But even if quantum shit is an integral part of the system - and the jury's out on that for me, but I'd think probably not - it doesn't really affect my ideas on determinism and the clockwork universe, because whether random or determined 'I' still have no say. So looking backwards in time, any choice, behaviour or whatever that was the result of quantum activity still cannot be attributed to 'me' and was only ever one way - even if it could not be 'predicted' beforehand by the clockwork universe, 'afterhand' it's a fait accompli, with no way of knowing (even theoretically) what else it could have been, if anything.
I'd agree that the brain is both binary and analogue, in that the outputs of a neuron are spikes of variable frequency (the analogue part) but they only fire after a certain threshold value has been reached (the binary part). As a general rule I don't actually think about the brain in terms of brain waves. Perhaps I should, but I don't, so I don't really know what is meant by 'slow wave sleep' etc. But I'll assume that what you mean is that, for a given population of neurons, they could all be spiking at exactly the same time and at the same frequency, resulting in perfectly synchronised pulses of maximum activity (or the reverse... no activity)? And the question being what effect would that have on the system? It's an interesting question... what would it take to 'break' the neural network?
I don't know, but the first thing that comes to mind is that neurons work by exchanging ions with the extracellular fluid, and therefore maintaining different concentrations inside and outside the cell. Electronics is not my strong suit, so perhaps you guys can help me understand this better. I'm referring to the book I recommended to you (CECN) to try and summarise the following...

The firing of a neuron relies on three ions: Na+ (sodium), Cl- (chloride), and K+ (potassium). Learning and other stuff at the synapses relies on Ca++ (calcium). The makeup of the extracellular fluid is similar to seawater, with dissolved salt accounting for the Na+ and Cl-. The membrane potential is the difference in charge between the inside and outside of the cell, across the cell membrane (wall). The resting potential is the membrane potential when the neuron is at rest and not receiving any inputs. This resting potential is maintained by the sodium-potassium pump in the cell membrane, which actively pumps Na+ out of the cell and a smaller amount of K+ into it, resulting in a negative membrane potential of -70mV when the neuron is at rest. The book says the sodium-potassium pump uses energy and can be likened to charging up the battery that runs the neuron. The neuron lets ions into or out of the cell through either pumps or channels; channels allow ions to move not just by electrical forces but also by diffusion, so the concentrations that are maintained take diffusion into account.

The main ion involved in firing action potentials is Na+, and there are two primary ways it can get into the cell passively through channels (which it 'wants' to do, because of the sodium-potassium pump actively pumping it out and creating an imbalance). One way happens at the synapse, when Na+ channels are opened by the binding of the neurotransmitter glutamate, and the other happens at voltage-gated channels (which represent the threshold value of neurons) at the axon hillock, where an action potential originates. In either case the result is the 'depolarisation' of the neuron as the membrane potential moves closer towards 0. At the axon hillock, the voltage-gated channels that open allow Na+ to flood in, and this increased excitation activates another set of channels that inhibit the neuron. The result is a spike of excitation followed by inhibition, where the inhibition (among other things) causes the membrane potential to overshoot the resting potential - ie go lower than -70mV - resulting in a 'refractory' period following a spike, during which the neuron is unable to fire again until the membrane potential climbs back up to the threshold level, and thus an effective maximum rate at which the neuron can fire spikes.

Anyway, the axon is insulated in sections along its length with a 'myelin' sheath, separated by relay stations called nodes of Ranvier, which perform the same function as at the axon hillock, serving to re-amplify the degrading signal as it propagates along the axon. When it gets to the end of the axon and enters one of the axon terminals (also known as axon buttons), which is the pre-synaptic site, it triggers voltage-gated channels to allow Ca++ into the terminal (and from internal stores), which causes little sacs - called vesicles - of neurotransmitters to bind with the cell wall and release their contents into the synaptic cleft.
By the way, the neurotransmitters themselves are produced in the soma and transported up to the axon terminals by microtubules, acting kinda like a pipeline, so that's one use for the little guys that doesn't rely on quantum shit. Anyway, once the neurotransmitter is in the synaptic cleft, it diffuses across and binds with specific receptors poking out of the dendritic membrane on the post-synaptic neuron, and depending on the receptor type causes different things to happen in the receiving neuron: either opening channels or setting off chemical cascades of reactions. Finally, about the other two ions I haven't talked about: Cl- channels are used to inhibit the neuron and are triggered by the neurotransmitter GABA, which is sent by inhibitory interneurons. K+ constantly leaks out of neurons in small amounts through an always-open channel; there is a voltage-gated channel that lets K+ out when a neuron gets very excited, and another type of K+ channel opens as a function of how much Ca++ is present in the neuron (more indicating extended periods of activity), so K+ is used to inhibit overactive neurons and is involved in 'neuron fatigue'.
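(For the concentration-gradient part of that summary, the standard Nernst equation gives the equilibrium potential each ion 'wants'; a small Python sketch with typical textbook concentrations - the numbers are illustrative, not taken from the post or the book.)

Code:
import math

R, F, T = 8.314, 96485.0, 310.0   # gas constant, Faraday constant, body temperature (K)

def nernst_mV(conc_out, conc_in, charge):
    # equilibrium potential in mV from the concentration gradient:
    # E = (RT / zF) * ln([out] / [in])
    return 1000.0 * (R * T) / (charge * F) * math.log(conc_out / conc_in)

# typical textbook concentrations in mM (illustrative values)
print("E_K  ~", round(nernst_mV(5, 140, +1)), "mV")   # about -89 mV
print("E_Na ~", round(nernst_mV(145, 12, +1)), "mV")  # about +67 mV
# the resting potential (~ -70 mV) sits near E_K because the resting membrane
# is mostly permeable to K+; opening Na+ channels pulls the potential toward
# E_Na, which is the depolarisation described above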
Anyway, the point of all that was to hopefully understand what could go wrong if a butterfly-effect, snowball thingy happened. So I was thinking: since the neurons rely on maintaining very specific potential differences and concentration gradients relative to the extracellular fluid, I think it would be fair to say that the content of the extracellular fluid must be regulated just as much as it is inside neurons. And for that the blood-brain barrier springs to mind, because it requires active transport of the nutrients that it allows through the barrier (which is not everything... not toxins in the bloodstream, for instance) via special transport molecules/cells, whatever they may be. So the question is: if you've got an edge case where say all neurons are either in the resting state or in the fully excited state, what would be the situation in the extracellular fluid? Would there be enough ions to go around, for a start, to even allow that to happen? If all the neurons were in the resting state, therefore with a greater concentration of Na+ outside than in, how would that affect the balance for all neurons... could they even function properly, given that if they all pumped out Na+ then presumably the concentration of Na+ in the extracellular fluid would skyrocket, creating a very different resting potential? Conversely, if all neurons were fully excited - ie having a membrane potential of 0 - then what would be the effects? Any ideas from you two brainboxes? But one thing I think is for sure is that a neuron has a maximum firing rate, so even if it received massive amounts of excitation at the same time on its dendrites from different neurons all firing at maximum rate, they would open channels all over but it would still result in the same depolarisation, just coming in through more openings. That makes sense.
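(A crude leaky integrate-and-fire sketch in Python - all constants made up - of two points above: the spike is all-or-nothing once the threshold is crossed, the firing rate varies smoothly with input strength, and the refractory period caps the maximum rate no matter how much excitation arrives at once.)

Code:
def firing_rate(input_current, sim_steps=1000, threshold=1.0,
                leak=0.9, refractory_steps=5):
    # leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates input, and emits an all-or-nothing spike at threshold
    v, spikes, refractory = 0.0, 0, 0
    for _ in range(sim_steps):
        if refractory:                   # can't fire again straight away
            refractory -= 1
            continue
        v = v * leak + input_current     # leak plus new input
        if v >= threshold:               # binary part: all-or-nothing spike
            spikes += 1
            v = 0.0
            refractory = refractory_steps
    return spikes / sim_steps            # analogue part: a graded rate

for current in (0.05, 0.15, 0.3, 0.6):
    print(current, "->", round(firing_rate(current), 3))
# sub-threshold input never fires; stronger input raises the rate,
# but the refractory period caps it no matter how hard the cell is driven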
As for your question to both of us, about pulling neurons 1-by-1, I really don't know, I'm sorry. I think a neural network will always find a way to represent whatever it can, depending on its connectivity, but with fewer neurons, and thus indirectly less connectivity, the scope of the representations would reduce. But how to translate that into stable or variable I don't know... I think it would always be pretty stable whatever size it was, but I don't really know what you mean.