Posts: 67189
Threads: 140
Joined: June 28, 2011
Reputation: 162
RE: Seeing red
February 4, 2016 at 9:21 am
(This post was last modified: February 4, 2016 at 10:10 am by The Grand Nudger.)
@Benny, not sure. Removing a component of a system doesn't have a standard effect. If I "removed" one of your PC's ALU functions (which always reduces to a single gate transmission in one of your CPU cores), you -might- never notice. Or you might notice right away. It would depend upon how integral to the operation of the system the removed component was, or how often you leveraged the function. That's a dedicated function, though, with no redundancy. Removing single neurons, as far as I know, doesn't remove dedicated functions (we're not sure they have a specific function they're tasked to; plasticity)...though there's some critical point where you've removed enough neurons...say, the neurons of the motor cortex...at which paralysis ensues.
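A toy sketch of that dedicated-function case (the ops and the stuck-at-0 fault model are invented for illustration, not specific to any real CPU): whether the removal is ever noticed depends entirely on whether the broken function is ever exercised.

```python
# Hypothetical sketch: a 1-bit ALU where one dedicated function is
# "removed". You only notice the fault if you actually call it.

def make_alu(and_gate_broken=False):
    def alu(op, a, b):
        if op == "ADD":
            return (a + b) & 1           # sum bit only, carry ignored
        if op == "AND":
            if and_gate_broken:
                return 0                 # stuck-at-0 fault: gate removed
            return a & b
        raise ValueError(op)
    return alu

alu = make_alu(and_gate_broken=True)
print(alu("ADD", 1, 1))  # 0 -- still correct, fault never exercised
print(alu("AND", 1, 1))  # 0 -- wrong (should be 1), fault now visible
```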
What I don't know, relative to the question, is whether or not this is what you mean by stable and variable. Presumably, if you've removed all of the neurons of the motor cortex, the system itself (ignoring the "full body" system) is going to be pretty stable (even though you can't move). If there's some amount of them left trying to coordinate with nobody, as it were, you can imagine a situation in which jerky and unpredictable motion is both expressed by the subject and reflected in constant "errors" in the cortex. You'll see this with the condition I linked back there with the Nazis, due to iron buildup in the brain which causes neurons to malfunction, leading to Parkinson's-like symptoms that get progressively worse. You might consider this chemical removal of neurons leading to instability both in the system and in loss of function (or control of function) on the whole.
@emjay, if brain is comp...then no: if all components of a comp system yield the same state simultaneously, that's critical failure. It can no longer comp (and it can't reboot -itself-, as that would be a signal change as well). If you were designing a board with this problem in mind, for whatever reason...say, in this case, you were going to use it around a lot of ambient power discharges that might interfere with the circuit...what you -could- do is run an output from some (or all, theoretically) gates in process, and if, and only if, all gates are on simultaneously, that trips a NAND wired directly to the power supply, shutting the system down (of course, you'll have just engineered a single-point-failure criterion for your whole system...if that NAND malfunctions, so too does the entire board). You'll actually find error-checking arrays like this in your home PC, but you'll find even more exotic arrays in boards meant for exotic purposes. I know that probably doesn't answer your question outright, but it gives you a little more context for the question. That's a mechanical problem which does exist in comp systems, for which a solution exists...but it's the kind of solution which invokes near-instant suspicion in the case of an evolved computer. Thankfully, we don't seem to be subject to this problem.

I picked up on something earlier that I wanted to comment upon. There's an upper limit to how fast neurons can fire, and there's also an upper limit to how quickly those signals can move. Presumably, there's time for error checking in transit, and there must be some sort of frontloading or backloading of information when the amount of work being done exceeds the ability of the system to transmit the data as a packet to the relevant centers. This is the sort of thing that answers a question benny had a while back as well..."what do I gain from considering mind material, or considering mind comp?" Well, you gain insight into how existing comp systems work around problems we would perceive or describe as cognitive problems. The great and wonderful "well, you could try to solve for x like so". That gives us a hell of a working start to exploring how the brain does it. We can say, "we're going to engineer 'problem x' in the brain, and then see if it's doing anything that resembles this solution in response to that". For example, present each eye with a different image, isolated from the other. What do we see the brain doing; how does it resolve conflicting inputs? I mention this because, mechanically, your situation above where all neurons are in the same state is the ultimate expression of the conflicting-inputs problem.
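A minimal sketch of the watchdog idea above, assuming nothing beyond the post's description (function names and gate count are illustrative): the NAND of all monitored outputs goes low only when every gate reads on, and that one signal cuts the power.

```python
# Illustrative sketch of the all-gates-on watchdog described above.
# The NAND of all monitored outputs is 0 only when every input is 1;
# that single condition trips the power-supply shutdown.

def nand(*inputs):
    return 0 if all(inputs) else 1

def watchdog(gate_outputs):
    """Return True if the power supply should be shut down."""
    return nand(*gate_outputs) == 0   # trips only when ALL gates are on

print(watchdog([1, 0, 1, 1]))  # False -- normal operation
print(watchdog([1, 1, 1, 1]))  # True  -- critical failure, cut power
```

Note the deliberate single point of failure: the health of the whole board now rides on that one gate, which is exactly why such a design would look suspicious in an evolved system.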
Also, in your convo with Jorg you mentioned that you didn't see the need for programming, that the hardware would handle and explain all function. That's always the case...even when there -is- programming. Programming is just a set of instructions for the hardware to be in a certain state. It's -always- about the hardware...your NN weighting process...is writing a program, constantly. That's the point at which "classical" comp mind fell flat on its face and things like NNs and heuristics took over as practical descriptions.
Posts: 10328
Threads: 31
Joined: April 3, 2015
Reputation: 64
RE: Seeing red
February 4, 2016 at 12:30 pm
(February 4, 2016 at 9:21 am)Rhythm Wrote: @emjay, if brain is comp...then no: if all components of a comp system yield the same state simultaneously, that's critical failure. It can no longer comp (and it can't reboot -itself-, as that would be a signal change as well). If you were designing a board with this problem in mind, for whatever reason...say, in this case, you were going to use it around a lot of ambient power discharges that might interfere with the circuit...what you -could- do is run an output from some (or all, theoretically) gates in process, and if, and only if, all gates are on simultaneously, that trips a NAND wired directly to the power supply, shutting the system down (of course, you'll have just engineered a single-point-failure criterion for your whole system...if that NAND malfunctions, so too does the entire board). You'll actually find error-checking arrays like this in your home PC, but you'll find even more exotic arrays in boards meant for exotic purposes. I know that probably doesn't answer your question outright, but it gives you a little more context for the question. That's a mechanical problem which does exist in comp systems, for which a solution exists...but it's the kind of solution which invokes near-instant suspicion in the case of an evolved computer. Thankfully, we don't seem to be subject to this problem.

I picked up on something earlier that I wanted to comment upon. There's an upper limit to how fast neurons can fire, and there's also an upper limit to how quickly those signals can move. Presumably, there's time for error checking in transit, and there must be some sort of frontloading or backloading of information when the amount of work being done exceeds the ability of the system to transmit the data as a packet to the relevant centers. This is the sort of thing that answers a question benny had a while back as well..."what do I gain from considering mind material, or considering mind comp?" Well, you gain insight into how existing comp systems work around problems we would perceive or describe as cognitive problems. The great and wonderful "well, you could try to solve for x like so". That gives us a hell of a working start to exploring how the brain does it. We can say, "we're going to engineer 'problem x' in the brain, and then see if it's doing anything that resembles this solution in response to that". For example, present each eye with a different image, isolated from the other. What do we see the brain doing; how does it resolve conflicting inputs? I mention this because, mechanically, your situation above where all neurons are in the same state is the ultimate expression of the conflicting-inputs problem.

Also, in your convo with Jorg you mentioned that you didn't see the need for programming, that the hardware would handle and explain all function. That's always the case...even when there -is- programming. Programming is just a set of instructions for the hardware to be in a certain state. It's -always- about the hardware...your NN weighting process...is writing a program, constantly. That's the point at which "classical" comp mind fell flat on its face and things like NNs and heuristics took over as practical descriptions.
I'm not quite sure what you mean by 'frontloading' and 'backloading', but there are feedforward and feedback connections, which might be close to what you're talking about. Say you've got four layers L1-4 connected bidirectionally with each other, like my earlier example but all on top of each other this time rather than two on a middle 'level', and say that L1 not only projects to L2 but also projects to L4... but that L4 only projects back down to L3, not to L1 as well. Then what would happen is L1 would activate L2 and L4 at the same time, and L4 would start sending feedback down to L3, biasing it, and the same from L3 down to L2, and thus the activation going up from L1 and coming down from L4 would meet in the middle. I don't know if that's what you mean by frontloading, but it allows the network to act according to expectation.
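A rough sketch of that wiring, with invented layer sizes, random weights, and a generic squashing nonlinearity (none of these numbers come from the post): L1 drives L2 and L4, L4 feeds back to L3 and L3 to L2, so bottom-up and top-down activity meet at L2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                      # illustrative units per layer
W12, W14 = rng.random((n, n)), rng.random((n, n))  # feedforward L1->L2, L1->L4
W43, W32 = rng.random((n, n)), rng.random((n, n))  # feedback L4->L3, L3->L2

def act(x):
    return np.tanh(x)                      # simple squashing nonlinearity

L1 = rng.random(n)                         # input pattern
L2_bottom_up = act(W12 @ L1)               # L2 driven by L1 alone
L4 = act(W14 @ L1)                         # the skip projection L1->L4
L3 = act(W43 @ L4)                         # top-down feedback from L4
L2 = act(W12 @ L1 + W32 @ L3)              # bottom-up and top-down meet here

print(np.linalg.norm(L2 - L2_bottom_up))   # nonzero: expectation biased L2
```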
Also, using chemical synapses slows down transmission, as does the opening and closing of all these channels, but there are other types of transmission (which I don't know that much about... sorry) that are purely electrical and don't use neurotransmitters, so if you're looking for fast connections, that may be where to look.
And yep, that's what I'm about as well... you do it with comp mind, I do it with neural networks... kind of reverse engineering the brain based on first principles, proposing a problem and a solution and seeing how well it matches up with what is observed.
When I was talking about the all-on/all-off problem I was primarily concerned with the effects on the extra-cellular fluid and whether it would even be able to function as a whole network. But if you're talking about that as a precursor to a conflicting-inputs problem, just to say, in case this was what you meant, that the brain is excellent at selecting between two equally valid patterns of input. Picture, if you will, a household radiator with its sticky-out bits... if you look at that, it's an optical illusion where you can see either the sticky-out bits or the indented bits as being in front. There are better examples of this illusion, I'm sure, but this is the one that comes to mind right now. Anyway, your mind flips between seeing it one way and seeing it the other, and it has two ways of resolving the conflict.

The first is inhibition, which in practice will force one 'winner' out of many neurons in a layer... shutting down the others so that only one comes out on top... and all it needs is a slight fluctuation in activation for one of them to gain the upper hand. So say we've got all of our neurons on, as per this edge case; then inhibition will still cause the network to settle into one state (where the neurons in the layer with inhibition, in this case, are binding neurons for a whole context below). If it really is tight and nothing can get the upper hand, that's where neuron fatigue comes in... a neuron can't fire indefinitely, so when it turns off it can tip the balance.
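A toy illustration of those two tie-breakers (the update rule and constants are invented): lateral inhibition amplifies any small fluctuation into a single winner, and an exact tie can only be broken by something like fatigue.

```python
import numpy as np

def winner_take_all(a, k=0.3, steps=60):
    """Each unit excites itself and is suppressed by total rival activity,
    so a tiny initial edge grows exponentially into a lone winner."""
    a = a.astype(float).copy()
    for _ in range(steps):
        rivals = a.sum() - a                            # total rival activity
        a = np.clip(a + k * a - k * rivals, 0.0, 1.0)   # self-excite, inhibit
    return a

rng = np.random.default_rng(1)
tied = np.full(4, 0.5) + rng.normal(0, 1e-3, 4)   # near-perfect tie
print(winner_take_all(tied).round(2))             # e.g. [0. 0. 1. 0.]
```

With a mathematically exact tie these dynamics stay symmetric forever, which is where the fatigue mechanism earns its keep: the first unit to tire breaks the symmetry and the network settles around a survivor.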
As for the programming, yep, I see what you mean. The programming in this case probably comes about not just from general neural network dynamics but also from the specific connectivity patterns... different connectivity patterns will create different network dynamics... as, for instance, is the case with the bidirectional connectivity of the cortex... that creates completely different (and really cool) network dynamics than areas without that sort of connectivity.
Posts: 67189
Threads: 140
Joined: June 28, 2011
Reputation: 162
RE: Seeing red
February 4, 2016 at 2:51 pm
(This post was last modified: February 4, 2016 at 3:12 pm by The Grand Nudger.)
More musing over the mechanical problems between computational steps than the steps themselves, actually...particularly in the context of benny's question. The issue of signal size, speed, and buffering explains for us both how a system can accommodate chaos -and- some experiences or behaviors we're all familiar with, like zoning out when we focus, or going to "lalaland".
Obviously we must have a way to deal with this, as it's an overwhelmingly present issue, mechanically, and your comments speak to that effect...I propose frontloading as a way of overcoming the issue...but frontloading doesn't work in the context of the NNs you're describing. I propose frontloading architecture, specifically, not frontloading processes (which is what your NNs could do), because it means that it's a problem that exists, but doesn't matter to the architecture in question...it doesn't have to do anything specific, or "learn" anything, to overcome it.
If a system -does- actually have to respond to the problem (rather than engineering its solution at the level of hardware), then it's an inevitability that it's going to run into a situation where it's trying to force a 40 inch peg down a 20 inch hole, and in context...I think we call that a stroke, lol. We could say, well...it learns to build a process center for dealing with that, which is fine, as long as it learns that before it encounters the problem; that would work...until the inevitability of that process center encountering the same error -in itself-. That's a lot of room for failure in a system that is going to be subject to a lot more "chaos" out in the "real world" than it will in a sim environment, where some things are taken for granted and others are completely ignored for purposes of brevity.
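One hedged way to picture the mechanical problem, reading "frontloading architecture" as buffering built into the hardware (buffer sizes and workloads are invented): a producer bursts more signal than the channel can carry per tick, and either the architecture absorbs it for free or the overflow becomes an explicit failure the system has to handle.

```python
from collections import deque

CAPACITY_PER_TICK = 20    # channel bandwidth: units transmitted per tick

def run(workload, buffer_size):
    """Feed bursts of signal through a fixed-capacity channel."""
    buf, sent, dropped = deque(), 0, 0
    for burst in workload:                # a 40 inch peg, a 20 inch hole
        for unit in range(burst):
            if len(buf) < buffer_size:
                buf.append(unit)          # absorbed by architecture alone
            else:
                dropped += 1              # the failure the system must handle
        for _ in range(min(CAPACITY_PER_TICK, len(buf))):
            buf.popleft()
            sent += 1
    return sent, dropped

print(run([40, 40, 40, 5, 5], buffer_size=100))  # (100, 0): bursts smoothed
print(run([40, 40, 40, 5, 5], buffer_size=10))   # (40, 90): data lost
```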
Quote:As for the programming, yep, I see what you mean. The programming in this case probably comes about not just from general neural network dynamics but also from the specific connectivity patterns... different connectivity patterns will create different network dynamics... as, for instance, is the case with the bidirectional connectivity of the cortex... that creates completely different (and really cool) network dynamics than areas without that sort of connectivity.
-Another interesting thing to note is that whatever state the connected system is in is, by simple virtue of its existence, the default state of the machine language of the system; in the absence of other instructions it will always return to this state. The more connected it is, the more it can do -without- higher-level translation and instructions, without programming. For example, when you subject data to your ALU, it's actually performing all possible functions on that data by default...you're selecting which output you prefer. It's doing all that logical work without your input; you don't have to tell it to do that (that's how we sped up processors initially: made their default states encompass a wider range of work). That lessens the load on whatever processes you can imagine for it, and even for a "learning system" it lightens the load of computation...it also gets around an interesting question as regards learning systems: if they express their functions by the system of weights you have...how did they learn to learn? Personally, I think that brains are a bastard amalgam between heuristics and dedicated functions...and I think they come that way standard, from birth. That the architecture itself supports a minimum level of operation (and I'm talking high-level cognition here, not just keeping the heart beating) by virtue of -nothing- other than its construction. No complex high-level programming relationships or referential data sets saved from experience required.
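A sketch of the "default state does the work" point (the operations and bit width are invented; no particular CPU is being described): the ALU below computes every function on its inputs unconditionally, and the opcode merely selects which result leaves on the output bus.

```python
def alu(a, b, select):
    """All results exist by default; `select` just picks one off the bus,
    the way a multiplexer does. Selection, not computation, is the choice."""
    results = {
        "ADD": (a + b) & 0xFF,   # 8-bit sum
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
    }                            # every function "fires" on every input
    return results[select]

print(alu(0b1100, 0b1010, "AND"))  # 8 -- the other three results existed too
```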
What are your thoughts? How does the NN learn to learn, and do you think that we're NN blanks at birth (or to what extent would we or could we be?), or do you expect that, like a PC, we're shipped with bundled utilities, lol? Or do you see us more as hard-built to solve for x, a loaded gun waiting for the trigger to be pulled? Also, can you imagine a failure condition for an NN, some scenario in which it is provided with the inputs but fails to comp? What would that be like, and how could it happen?
Posts: 10328
Threads: 31
Joined: April 3, 2015
Reputation: 64
RE: Seeing red
February 4, 2016 at 4:36 pm
(February 4, 2016 at 2:51 pm)Rhythm Wrote: What are your thoughts? How does the NN learn to learn, and do you think that we're NN blanks at birth (or to what extent would we or could we be?), or do you expect that, like a PC, we're shipped with bundled utilities, lol? Or do you see us more as hard-built to solve for x, a loaded gun waiting for the trigger to be pulled? Also, can you imagine a failure condition for an NN, some scenario in which it is provided with the inputs but fails to comp? What would that be like, and how could it happen?
I'll just reply to this now for the moment, if that's okay, and the rest later. I'm already late for a mafia game that has just started, so I might not be around as much for a few days. Sorry about that.
My book discusses this question. It argues that it would not be feasible for the genome to encode for specific representations in the brain...i.e. specific patterns of weights...and therefore that what is more likely is that it encodes for structural areas, specific types of connectivity, specific types of neurons, amounts of inhibition in areas, etc. So in other words it gives us the structure, which in turn biases the types of dynamics it will produce and the types of learning that will occur. Like what I said above...the connectivity makes all the difference. So to the question of whether we start as blank slates, the answer is yes and no...we are most likely built with the network architecture in place, but not the content.

As to how the network learns to learn: it's a self-organising network...learning happens at the level of individual synapses, with no overseeing required...just as a function of how active the pre-synaptic neuron is compared to the post-synaptic neuron (as I explained in my post to benny about association). Every time a neuron fires, it's learning. But at a variable rate of change, with some specialist areas learning much faster - for episodic memory etc - than the general slow rate used to model the world. So presented with the same environment, the weights will come to represent it no matter how randomly they initially start.

But some parts of the brain are clearly structurally evolved for a particular purpose...the visual cortex, for instance, contains lots of different types of specialist neurons, in special arrangements, and 'tuned' in very specific ways...such as the line detectors that detect lines in different orientations. So it may be the case that these areas are essentially hard-wired by evolution and are not actually about learning at all, with the learning occurring later down the line in association areas, which is pretty much the entire cerebral cortex. So I'm sorry about that; my problem is that I generalise neural networks too much...there are many different types of neurons and it's not necessarily the case that all of them learn. The type of neural network I'm most familiar with is the cortex, where associative learning happens along with bidirectionality, contexts, stereotypes, bias and all the rest...all the cool stuff associated with cognition.
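A minimal sketch of that synapse-level rule (a generic Hebbian form with an invented learning rate and layer sizes; the book's exact equation may differ): each weight changes purely as a function of local pre- and post-synaptic activity, with no overseer, and random initial weights still converge on the presented environment.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 6, 3
W = rng.random((n_out, n_in)) * 0.1            # random initial weights

def hebbian_step(W, pre, lr=0.01):
    post = W @ pre                              # post-synaptic activity
    W = W + lr * np.outer(post, pre)            # strengthen co-active pairs
    return W / np.linalg.norm(W, axis=1, keepdims=True)  # keep bounded

pattern = np.array([1, 1, 0, 0, 1, 0], dtype=float)  # a stable environment
for _ in range(200):
    W = hebbian_step(W, pattern)

print(W.round(2))   # every row has converged on the pattern's direction
```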
Posts: 67189
Threads: 140
Joined: June 28, 2011
Reputation: 162
RE: Seeing red
February 4, 2016 at 4:44 pm
(This post was last modified: February 4, 2016 at 4:47 pm by The Grand Nudger.)
So, fair to say that, limited to just the learning bit, the core of cognition as it were...you think that it's a loaded gun, the "first shot" giving the context and content that will build a more complex (and hopefully representative) system, a full(er) cognitive view, through self-organization?
What are the weaknesses of the type of NN you're most familiar with, and what are its practical dependencies?
(HF w/ mafia, see you when you get back)
Posts: 10328
Threads: 31
Joined: April 3, 2015
Reputation: 64
RE: Seeing red
February 4, 2016 at 6:48 pm
(February 4, 2016 at 4:44 pm)Rhythm Wrote: So, fair to say that, limited to just the learning bit, the core of cognition as it were...you think that it's a loaded gun, the "first shot" giving the context and content that will build a more complex (and hopefully representative) system, a full(er) cognitive view, through self-organization?
What are the weaknesses of the type of NN you're most familiar with, and what are its practical dependencies?
(HF w/ mafia, see you when you get back)
I guess that's what I'm saying. It will find the statistical regularities in the environment it's presented with... so if there is a stable environment out there, it will produce stable representations. I think it will have no problem modelling the physical regularities of the world... the things we see, hear etc, because that's stable for everyone... we both see the same shapes etc... but where abstract thoughts are concerned - because this is all about abstraction... a limitless hierarchy of associations... that's what's so cool about it - I think that's where the real individuality lies, because it's modelling things that get further and further away from the source. In other words it will learn in the same associative way regardless of the level of abstraction of the input. And that's how you can come to associate anything with anything. So the model of the outside world I would expect to be roughly the same across individuals, but at the level of ideas I'd expect it to be much more variable.
To be honest I can't think of any weaknesses - it's beautiful - though I'll try. It's geared towards generalisation and categorisation, so things like bias and stereotyping come perfectly naturally to it - as a result of the bidirectional connectivity - but these things tend to do more harm than good in the world these days, especially when combined with emotion. And actually figuring out how it associates emotion with it is part of the fun... how certain contexts are activated depending on your needs, so if you're hungry food comes to mind and you'll probably see food in an inkblot... that is to say, your perception is biased towards looking for food.

I talk about individual contexts and that's one thing... any related set of associations is a context... but life itself is like one big ever-changing context, bound together by your environment as you progress through life; you return to your senses - your actual environment - after thinking, and everything in it has associations. So it really can be a case of 'out of sight, out of mind'.

I mentioned bootstrapping before. A context can be refreshed with very little input... so say I'm doing a programming project... that context... everything I learn relating to that stays active, or at least 'primed', in my mind, because once a stable context exists it stays active for quite some time and biases the network such that it will be easy to 'refresh' with very little input... that little input will cascade through the context according to the principles of bidirectional feedback and bring it back up to full strength. So in my programming project, if I leave the computer and come back to it later, I still remember what to do. But if I come back to a programming project after a long time, when the context is no longer active, I have to activate the context from scratch if I want access to the same assumptions I had before. Which is a lot harder, because not only is the bias no longer active, I've also forgotten how to trigger it, so I have to poke around its edges until I can activate it again... but I can activate it again... it's all there waiting to be reactivated, if you can only find the way in. It's amazing, so I really can't think of any weaknesses.
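A sketch of that "refresh a context from a fragment" behaviour, using a small Hopfield-style attractor network as a stand-in (a standard textbook construction, not emjay's model; the pattern and sizes are invented): a stored context is a stable state, and a partial cue cascades back to the full pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product storage of +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / n

def settle(W, state, steps=10):
    for _ in range(steps):
        state = np.sign(W @ state)          # feedback cascades through net
        state[state == 0] = 1
    return state

context = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # a stored "context"
W = train(context[None, :])

cue = context.copy()
cue[4:] = 1                                 # only a fragment is correct
print((settle(W, cue) == context).all())    # True: full context refreshed
```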
As to practical dependencies, I'm afraid I don't know what you mean.
Thank you. I hope you play again sometime... I do the stats and I've seen that you've played one game... it would be really nice to see you down there.
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation: 46
RE: Seeing red
February 4, 2016 at 8:20 pm
(February 4, 2016 at 8:00 am)Emjay Wrote: And the question being what effect would that have on the system? It's an interesting question... what would it take to 'break' the neural network?

I wouldn't use the term broken. Interference might break a digital system (it might not, anyway), but the brain is what it is, and I wouldn't say that rare events would generally mean a "breaking." I can imagine (and this is just speculation) that you could see epileptics as systems that are sensitive to constructive interference (i.e. harmonics), and that things like spiritual experiences might be as well.
Quote:Anyway, the point of all that was to hopefully understand what could go wrong if a butterfly effect, snowball thingy happened. So I was thinking: since the neurons rely on maintaining very specific potential differences and concentration gradients relative to the extra-cellular fluid, then I think it would be fair to say that the content of the extracellular fluid must be regulated just as much as it is inside neurons. And for that the blood-brain barrier springs to mind, because it requires active transport of nutrients that it allows through the barrier (which is not everything... not toxins in the bloodstream, for instance) via special transport molecules/cells, whatever they may be. So the question is: if you've got an edge case where, say, all neurons are either in the resting state or in the fully excited state, what would be the situation in the extracellular fluid?
That would be like having every star in the galaxy line up, I suppose.
Quote:As for your question to both of us, about pulling neurons 1-by-1, I really don't know, I'm sorry. I think a neural network will always find a way to represent whatever it can, depending on its connectivity, but with decreasing neurons and thus indirectly decreasing connectivity, the scope of the representations would reduce. But how to translate that into stable or variable I don't know... I think it would always be pretty stable whatever size it was, but I don't really know what you mean.
I suppose my question is whether adding more members adds a statistical balancing force that will never be disrupted (like the QM particles in my table never "spiking" and causing it to light on fire or something), or whether the increased complexity adds to the chance of a rogue wave-type situation where SOMETIMES remarkable things will happen as a gazillion discrete events just happen to line up.
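For a sense of scale on "a gazillion discrete events lining up", a back-of-envelope sketch (it treats units as independent fair coins, which real neurons certainly are not, and 86 billion is just a commonly cited rough neuron count): the chance of N independent 50/50 units all landing in the same state at once is 2^(1-N).

```python
from math import log10

def log10_p_all_aligned(n_units):
    """log10 of the probability that n independent 50/50 units agree."""
    return (1 - n_units) * log10(2)

for n in (10, 1000, 86_000_000_000):   # toy net ... rough human neuron count
    print(n, "->", f"1e{log10_p_all_aligned(n):.3g}")
```

Even at a thousand units the odds are already beyond astronomical, so on this naive model the rogue-wave worry never gets off the ground; correlated dynamics would be the only way to make it plausible.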
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation: 46
RE: Seeing red
February 4, 2016 at 8:27 pm
(This post was last modified: February 4, 2016 at 8:30 pm by bennyboy.)
(February 4, 2016 at 9:21 am)Rhythm Wrote: What I don't know, relative to the question, is whether or not this is what you mean by stable and variable. Presumably, if you've removed all of the neurons of the motor cortex, the system itself (ignoring the "full body" system) is going to be pretty stable (even though you can't move). If there's some amount of them left trying to coordinate with nobody, as it were, you can imagine a situation in which jerky and unpredictable motion is both expressed by the subject and reflected in constant "errors" in the cortex. You'll see this with the condition I linked back there with the Nazis, due to iron buildup in the brain which causes neurons to malfunction, leading to Parkinson's-like symptoms that get progressively worse. You might consider this chemical removal of neurons leading to instability both in the system and in loss of function (or control of function) on the whole.

Read my response to Emjay.
I'd say that if the brain represents a chaotic system whose job is to perform digital functions, then as you pulled neurons you'd lose "resolution," leading at some point to so much noise in the system that the digital functions could no longer be reliably performed, resulting in some total system failure at a critical mass.
If the brain cannot be represented by logic gates, however, then I'd expect a gradual decrease in function from Einstein down to earthworm-- like a dimmer switch rather than an on/off switch, as chaos should be (at least it seems to me in this moment of pure speculation) fractal.
This is important for your model of ideas, because the digital function of the brain might reasonably be simulated by any physical structure (say a computer), but a chaotic or analog function might not be reproducible in any manner without an actual brain. Alternately, ANY sufficiently chaotic AI system might be intrinsically conscious, if it is in fact that chaos in which the ghost in the gears resides.
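A toy contrast between those two failure modes (everything here is invented for illustration): a majority-vote readout holds its output until noise dominates the vote and then becomes unreliable all at once, while an analog average degrades smoothly, dimmer-switch style.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1001   # redundant units voting for a "1"; odd, so the vote can't tie

def readouts(n_removed):
    units = np.ones(N)
    units[:n_removed] = rng.integers(0, 2, n_removed)  # dead units -> noise
    digital = int(units.mean() > 0.5)    # majority vote: all-or-nothing
    analog = units.mean()                # graded average: a dimmer switch
    return digital, analog

for n in (0, 500, 900, 1001):            # pull ever more neurons
    d, a = readouts(n)
    print(f"removed={n:4d}  digital={d}  analog={a:.2f}")
# digital stays pinned at 1 while redundancy absorbs the noise, then
# becomes unreliable all at once; analog slides from 1.0 toward 0.5.
```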
Posts: 67189
Threads: 140
Joined: June 28, 2011
Reputation: 162
RE: Seeing red
February 4, 2016 at 9:50 pm
(This post was last modified: February 4, 2016 at 10:04 pm by The Grand Nudger.)
(February 4, 2016 at 8:27 pm)bennyboy Wrote: I'd say that if the brain represents a chaotic system whose job is to perform digital functions, then as you pulled neurons you'd lose "resolution," leading at some point to so much noise in the system that the digital functions could no longer be reliably performed, resulting in some total system failure at a critical mass.

That's going to happen in an orderly system as well. We see it in digital devices.
Quote:If the brain cannot be represented by logic gates, however, then I'd expect a gradual decrease in function from Einstein down to earthworm-- like a dimmer switch rather than an on/off switch, as chaos should be (at least it seems to me in this moment of pure speculation) fractal.
That's also going to happen if it -can- be represented by logic gates. We see it in analog devices.
Quote:This is important for your model of ideas, because the digital function of the brain might reasonably be simulated by any physical structure (say a computer), but a chaotic or analog function might not be reproducible in any manner without an actual brain. Alternately, ANY sufficiently chaotic AI system might be intrinsically conscious, if it is in fact that chaos in which the ghost in the gears resides.
As a fellow LoL player I'm horrified that you failed to recall a little thing called RNG. If the ghost, as you put it, resides in the chaos, there's a circuit for that. Thing is, in comp mind, there is no "ghost" -at all-, not even poetically. No "presence" behind the eyes, just the gears...you see what I mean? There's nothing residing "in there". There isn't even an "in there" in which to reside.
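On "there's a circuit for that": a sketch of one classic pseudo-randomness circuit, a 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11 (a standard textbook construction, not specific to any real chip). A shift register and a few XOR gates produce output that looks chaotic while being fully deterministic.

```python
def lfsr16(seed=0xACE1):
    """16-bit Fibonacci LFSR, taps 16/14/13/11."""
    state = seed
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)   # shift in the feedback bit
        yield state

gen = lfsr16()
print([next(gen) & 1 for _ in range(16)])    # chaotic-looking, determined
```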
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation: 46
RE: Seeing red
February 5, 2016 at 12:13 am
(February 4, 2016 at 9:50 pm)Rhythm Wrote: (February 4, 2016 at 8:27 pm)bennyboy Wrote: I'd say that if the brain represents a chaotic system whose job is to perform digital functions, then as you pulled neurons you'd lose "resolution," leading at some point to so much noise in the system that the digital functions could no longer be reliably performed, resulting in some total system failure at a critical mass.

That's going to happen in an orderly system as well. We see it in digital devices.

That's what I'm saying. You could see chaos in, say, the QM activity in an electronic logic gate, but that chaos is limited in such a way that it is irrelevant-- a gate is just a gate.
Quote:That's also going to happen if it -can- be represented by logic gates. We see it in analog devices.
Are you saying that logic gates are analog?
Quote:If the ghost, as you put it, resides in the chaos, there's a circuit for that.
Okay, this is the point around which we'll form an argument, I guess.