We are no different than computers
RE: We are no different than computers
@Rhythm

Just a quick thought regarding whether humans can break their programming by virtue of volition...

I see it as a possibility that what is presented and in focus in consciousness represents the currently most active areas of the brain's neural networks. So if the neural representation of an object - several related aspects bound together through association - is particularly active, you'll see that image in your mind's eye.

Given that theory, I found something I once read about meditation quite interesting: the claim that certain highly accomplished meditators can gain control over subconscious processes like heart rate - things that were never meant to be under conscious control. I don't know whether I believe the claims, but if they were true they could fit with this theory. The effect of meditation is to quiet the mind - to reduce neural activity - so if you were accomplished enough to bring it almost to a standstill, then formerly 'quiet' subconscious activity might seem 'loud' by comparison and so enter consciousness, giving us even more volitional control to break our programming than nature intended. In other words this could be a back door into the system that nature never intended, precisely because it would be dangerous to have conscious control over such systems.

PS can you recommend any specific books on your 'comp mind' theory?
RE: We are no different than computers
Try the reference list of this article.  I've read a few on there, by no means all or most.  
http://plato.stanford.edu/entries/computational-mind/
(some of the best stuff on comp mind is actually the criticism of the position, btw!)

To your comment, I would respond that volition is not actually required in order to "break programming". However, in order to discuss it at length we'd have to have a pretty good idea of what we were referring to when it came to that programming, and how it was broken - which we don't. I'm leery of claims surrounding "special abilities", specifically in the context of this conversation, because the truth value of comp mind as a theory or explanation wouldn't, by necessity, lend any value to those claims themselves. There are things which are not under a computational system's control regardless of its sensory data relative to the item in question.

Moving forward from that, though, nature intends nothing. Granted, if we could mess with the factory defaults so easily - if that were an achievable trait - we'd expect it to express itself in a deleterious way. If you could stop your heart by accident (or on purpose), then by sheer brute force of demographics more people would have done so by now. Proposing that it can merely be slowed (say, by some computational process), but only within limits (so as to avoid explaining why people haven't been stopping their hearts, intentionally or accidentally, so far as we can tell, as we would expect), opens up more questions and fails to resolve the larger question to which it refers. How?

That particular connection, to me, seems tenuous. I'll leave the meditative magic in the magic box for the time being (regardless of the value of ctm). Wink If it could be shown to be connected, it would be a much better explanation for how meditation works than some nonsense about life energy etc...eh? I just don't know that our cognitive apparatus is at the helm there, or even could be - even if it would be capable. I can, however, see a lot of reasons why a creature with that sort of ability would nix itself before we ever got around to asking how it achieved that feat.
RE: We are no different than computers

Thanks, Rhythm, for those references - I'll have some fun reading them Wink I've missed our talks about this from when I first joined the site; it certainly was an inspirational theory with a lot going for it.

What I said was just a thought, not to be taken too seriously Wink Oddly enough, the theory I was suggesting - the part about focus landing on whatever is most active - comes from a long time ago, when I was trying to work out what focus actually was. Back then I came to the conclusion that there was no real volition involved: your neural networks go about doing what they do, all the time 'showing' you the most active areas, with the self essentially dragged along for the ride, leaving focus and volition as an illusion. After all, it has already been shown that there is a gap between the motor system kicking into action and the feeling of having willed the action, suggesting that volition really is just an illusion. So in this case it wouldn't be real volition that broke any 'programming'; rather, the networks could, in these exceptional circumstances, settle into a rare state that in theory could result in the self-destruction of the organism. I don't know if that makes any difference? I didn't quite understand your objection.

And yes it was a bad choice of words saying 'nature intended'. Evolution is like Microsoft when it comes to software: a million little upgrades and patches but never going back and redesigning the system from scratch  Wink
RE: We are no different than computers
My objection (minor, academic) was to the connection between ctm and claims about what meditation can or can't do. A good example would be this: I can build you a computer that cannot turn itself off or even control its clock speed. In order for a computer (and by extension a computational system) to "turn itself off or slow itself down", a specific implementation must be constructed/referenced. Things cannot, simply by virtue of being computational systems, lay claim to that attribute (alteration of operating parameters, essentially). It's something that comp sys -can do- or -might be doing-...but not because they are comp sys. Thus...comp sys does nothing to advance the claims regarding what meditation can achieve, or in what manner it -is- achieved.
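A toy illustration of that point (entirely invented here, not anything from the post itself): the same loop run as two 'systems', only one of which has a self-halt path wired into its implementation. Being computational is not what grants the ability; the specific implementation is.

Code:
import time

def run(can_halt_self=False, ticks=5):
    # A trivial 'system': it just ticks. Only the variant whose
    # implementation includes a self-halt path can ever stop itself early.
    for t in range(ticks):
        print("tick", t)
        if can_halt_self and t == 2:
            print("halting myself")
            return
        time.sleep(0.01)
    print("ran to completion; nothing in me could cut it short")

run(can_halt_self=False)  # cannot alter its own operating parameters
run(can_halt_self=True)   # can, because that path was built in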

(You know I'm pretty vocal on the boards about ctm, so I take care to distance ctm from ancillary claims that may borrow some credibility -from- ctm...sometimes it's legit, sometimes it isn't. In this case I don't think there's enough to connect the two, but I don't take that any more seriously than you do - just extended wonderings. There was also a little in there that objected to the claim itself, that meditation allows control of specific body functions - which, on its face, seems like it might be a gross exaggeration of effect, lacking a lot of the other effects we would expect if it were true as told - not necessarily by you, but by the claimants.....lol, my opinions could always use more organization.......)
RE: We are no different than computers

Got yer. Very good example Smile I didn't mean to imply that what I was musing about had anything to do with ctm - the PS was just incidental, because you had been talking about it in this thread and it reminded me how interesting it was. Sorry about that.
RE: We are no different than computers
Tell you what, going back to references and ways to find out more about CTM: the best way, imo, is to model solutions to "mind" in an HDL. Nothing overarching or groundbreaking, just little pieces here and there - modules, you might say. So, you might ask yourself how to manufacture a comp sys that could "experience red" - or return some effect descriptively identical to whatever you might mean by "experiencing red". I would by no means say that modeling mind is easy; if it were, I wouldn't be pasting that model on -these- boards, eh? What you might find by doing, however - that you may not necessarily find by reading - is how simple -some- implementations are for things that produce effects we normally ascribe to mind.

Of course, it also allows you to recognize when a living creature -without- mind, presumably, is using some form of comp-for-effect. My favorite is kin selection, because almost invariably we conceptualize kin selection as a product of some manner of mind - limited, perhaps, in dogs, for example, and sophisticated in ourselves. Yet we can watch sea thistle achieve the effect through comp, and demonstrably so. That last bit is important (imo), because even if we, ourselves, were not producing these effects by means of comp mind, it shows us that they -can be-, and that we don't have to wonder about that specifically - that it's not just computers a la Microsoft that can compute, but also biological implementations, which -do- exist. Here, at least, we can see that certain structures of our own bodies "do comp" even if our minds aren't comp. At the very least it might whittle away all the effects not necessarily related to mind - all the noise - if mind were some "else". It would say: okay, mind is "else", but these things -long list- are comp. I think the things I would put in that box are the things that you've described as illusions, regardless of whether there is another box, or a better explanation for mind.
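As an aside on the kin selection example: the standard formalization is Hamilton's rule (help whenever relatedness times benefit exceeds cost). That rule isn't something from the post above, but it does show how little machinery the effect actually needs - a single comparison that any comp sys, minded or not, could carry out. A rough sketch in Python:

Code:
def should_help(relatedness, benefit_to_kin, cost_to_self):
    # Hamilton's rule: cooperate when r * B > C.
    # Nothing here requires a mind; it is one multiply and one compare.
    return relatedness * benefit_to_kin > cost_to_self

# Full sibling (r = 0.5): helping 'pays' only when the kin's benefit
# is more than twice the helper's cost.
print(should_help(0.5, 3.0, 1.0))   # True
print(should_help(0.5, 1.5, 1.0))   # False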

In short, I think that ctm would explain much - even in failing to explain mind, and might even help further that better explanation -of mind- as it failed.....so, worth exploring, eh?

-I just like to keep these things going.  Wink  
RE: We are no different than computers

Okay, I'm willing to try your experiment, but what's an HDL? Sorry - I'm usually about five years behind everyone technology-wise. But for what it's worth I'm already sold on the notion that a computer can be made of any material - that, as this thread asks, the brain is indeed a biological computer.

I haven't looked at computation in simplistic terms, as your experiment may or may not be suggesting - I'll find out when I do it - but I have looked at it a lot in neural network terms. One of my favourite books, and one that completely changed my whole view of the mind, is called "Computational Explorations in Cognitive Neuroscience". Its aim is to create artificial neural networks that model biological neural networks, right down to modelling the flow of ions into and out of the cells. It demonstrates how many features of mind can be implemented with these networks. So I don't know whether this counts, but I am already convinced of the capacity of neural networks to perform many of the things we take for granted about the mind.
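For a sense of the kind of unit those simulators are built from - a generic rate-coded neuron rather than the actual Leabra equations, so treat the numbers and the sigmoid as stand-ins - a few lines of Python cover the core computation: weighted input, a bias, and an output firing rate.

Code:
import math

def unit_activation(inputs, weights, bias=-0.4, gain=5.0):
    # Generic rate-coded unit: weighted sum of inputs pushed through a
    # sigmoid, giving a firing rate between 0 and 1. Real simulators like
    # Leabra/Emergent model membrane potentials and ion channels instead.
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-gain * net))

# A unit whose weights make it care about the first input feature.
print(unit_activation([1.0, 0.0], [0.8, -0.3]))  # strongly driven
print(unit_activation([0.0, 1.0], [0.8, -0.3]))  # barely responds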

But as for modelling things like seeing red ("Seeing Red" is a great book on that, by the way, by Nicholas Humphrey), that could be a lot harder and I wouldn't know where to start.

Edited to add: I've started to read that link you posted, and my modelling of brain features with neural networks is not what you're talking about. I have to say that article looks a bit technical and over my head - what I really need is a CTM for Dummies book Wink - but what it seems to be suggesting, and correct me if I'm wrong, is essentially to create an equation or algorithm with various mental states or whatever acting as the symbols in that equation. Is that what you are suggesting I do with the seeing red experiment?
RE: We are no different than computers
That is what the article suggests as the holy grail, as it were, of ctm..yes.  It's the route most commonly taken, and it shows the most promise in the way of supporting the theory.  

I'm suggesting something simpler, and probably not much different from what you're doing.  Create a machine capable of "seeing red" -however you describe it, or a machine that gets as close as you can manage*.  An HDL is a hardware description language.  It's used to design and test/sim circuits before they're built.  So it can show you how existent materials and structures can achieve an effect - rather than how an effect can be described mathematically.  I find that sort of explanation more compelling than all of the theories, algorithms, and theoretical math the world could possibly assemble - particularly in that the greater question being asked is how we, as organisms with structure and discernible systems...might achieve the effect we call mind - or whether or not any of our discernible structures and systems -can- achieve (or explain) mind. Think of it this way, a brilliant programmer can only make a pocket calculator do so much - and nature is not a brilliant programmer. The machine itself -must- be robust, if this even approximates an accurate explanation of our minds. Trying an HDL over algorithm puts the focus on the hardware, the system used and built for practical effect...and whatever our minds may be, or may be made of..they are at least providing that practical effect...rather than abstract algorithmic potential. I think...... I'm suggesting that you try something that you already do, using an unfamiliar tool. How do you model your NNs?
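To make the "seeing red" exercise a little more concrete: in practice you would write it as a small combinational module in an actual HDL (Verilog or VHDL), but the bare logic can be sketched in a few lines of Python - a detector that asserts "red" whenever the red channel dominates the other two by some margin. The channel names and the margin are invented for the illustration; whether returning that bit has anything to do with experiencing red is exactly the question at issue.

Code:
def sees_red(r, g, b, margin=32):
    # Assert 'red' when the red channel beats both others by `margin`.
    # In hardware this is two subtractors and two comparators - the point
    # of doing it in an HDL is to watch the effect fall out of gates and
    # wires rather than out of an algorithm.
    return (r - g) > margin and (r - b) > margin

print(sees_red(200, 40, 50))    # True  - a strongly red pixel
print(sees_red(120, 110, 115))  # False - greyish; nothing dominates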

Also, IDK if you need a CTM for Dummies, because you probably have an understanding of computational architecture if you model NNs. CTM doesn't propose anything novel to comp architecture (or to anyone familiar with it); it proposes that known architecture -could- account for mind. That they are somewhat forced to use algorithm has at least a little to do with how difficult arranging a useful experiment on a human brain would be. If you don't have diagnostics (which we don't), then the only way to learn how a comp actually works is to take it apart - preferably while it's running - to see what happens as you remove or add components. This is a problem that plagues every area of cognitive science, of course.

There's a big CTM/NN split, btw. NN definitely describes the architecture of the brain, so far as we can tell, much better than "classic" CTM - we aren't digital or analog computers with circuits built efficiently-to-task, that's for damned sure, and no one can argue with the NN guys on that count. CTM proponents simply insist that the reliable functions of an NN lie in its ability to emulate a classical comp, or classical computational relationships. But if you're looking for a dummies guide, I'll suggest you start even lower: a "dummies guide" to computational architecture. NAND2Tetris.
http://www.nand2tetris.org/
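The spirit of that course is that everything upstream gets built from a single primitive. A toy version of the first step - sketched here in Python rather than the course's own HDL - derives NOT, AND and OR from NAND alone:

Code:
def nand(a, b):
    # The one primitive everything else is built from.
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

# Truth table for OR, built from nothing but NANDs.
for a in (False, True):
    for b in (False, True):
        print(a, b, or_(a, b))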

*at the very least, doing so will help us to separate what is machine and what is mind, if we decide that the two are not interchangeable in this instance. We would have some metrics upon which to decide that this effect, or this part of the overall effect is "programming" or "other" after we've exhausted what the comp is capable of via machine language alone.
RE: We are no different than computers

It has been a long time since I was really into neural networks - probably about five years - and it was never in any professional capacity, just an interest/obsession. The book I mentioned came with, or linked to, some simulation software called Leabra/PDP++, which was later upgraded to something much better called Emergent. If you bear with me I'll try and find the links so you can try it out for yourself. That's what you'd use to model the NNs; failing that, they are pretty simple to program yourself, albeit without quite so many bells and whistles as that software.

How far have you managed to get with your own models?

Here is the link to the current version of the software:
http://grey.colorado.edu/emergent

You've spurred me on now to get back into it, if I can just get it to work in my distro of Linux (or in Wine) Wink Not likely unfortunately Sad
RE: We are no different than computers
My own models, lol - nothing at all approaching mind in its totality or effectiveness. Don't want you to get the wrong idea. I model (and sometimes build) circuits largely with an aim to design better sensors for commercial ag - and no one's buying my little toys...they're all private use..lol.
(I'm butchering one of those Parrot drones right now to haul an IR camera and plan paths that follow the IR trace for water management...and trying to sucker a buddy into programming a mobile app to handle the data, so as to replace the functionality lost when I destroy the native systems in order to jerry-rig it.)

-IR can provide a means for detecting a whole range of problems in crops, from water stress to pests to disease. Imagine having a bunch of cheap, tiny, resource-efficient drones that did all of our detection work for us in real time and could feed that data to our smartphones. Trick is getting it to fly itself, eh?
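The detection half of that idea reduces to very little code (the grid, temperatures and threshold below are invented purely for illustration; the hard part, as noted, is the autonomous flying). Water-stressed canopy tends to run warmer than well-watered canopy, so flagging trouble spots from a thermal/IR pass is essentially one comparison per cell:

Code:
# Hypothetical canopy temperatures (deg C) from a thermal/IR pass over a field.
field_temps = [
    [24.1, 24.3, 27.9],
    [24.0, 28.4, 28.1],
    [23.8, 24.2, 24.0],
]

STRESS_THRESHOLD = 27.0  # made-up cutoff above typical well-watered canopy temp

for row in field_temps:
    print(["stressed" if t > STRESS_THRESHOLD else "ok" for t in row])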