(January 29, 2015 at 12:37 am)Surgenator Wrote: Just thinking about the memory space needed for the mapping. The human brain has ~100 billion neurons, with each neuron making over 1,000 connections. A mouse's brain is only ~0.03% of the mass of a human brain.
If we assume mouse and human neurons are roughly the same, then a mouse will have 30 million neurons and 30 billion connections in total. You would need a 64-bit integer to give each neuron a unique ID, and 64-bit integers for each connection. So that is 64 * (30 million + 30 billion) bits ~ 240 GB of memory just to store the map. This doesn't include other variables (like activation thresholds) needed for processing.
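Sketching that arithmetic out as a quick sanity check (a back-of-the-envelope sketch only, keeping the 64-bits-per-entry assumption from the quote):

Code:
# Rough memory estimate for a mouse-scale connectivity map (illustrative only).
neurons = 30_000_000             # ~0.03% of the human brain's ~100 billion
connections = neurons * 1_000    # ~1,000 connections per neuron
total_bits = 64 * (neurons + connections)   # 64-bit entries, as assumed above
print(total_bits / 8 / 1e9, "GB")           # ~240 GB, before thresholds etc.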
And as you say, that's just for mapping. Some self-promoting media scientists have suggested in the past that we'll have computers that surpass human intelligence by 2040, because they extrapolate Moore's law and assume that a single neuron is the equivalent of a single byte.
In fact, a single neuron is an amazingly complicated machine. A good simulation of a real neuron is far more complicated, and capable of far more computation, than a whole artificial neural network built from simple integrators with activation thresholds.
Don't think one byte per neuron, think one core per neuron. And then wonder how you are going to have each core connect to thousands of other cores.
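To make the contrast concrete, here is roughly what one of those "simple integrators with activation thresholds" looks like (a minimal leaky integrate-and-fire sketch; the parameter values are illustrative, not biological measurements). A faithful model of a real neuron has to track far more state than this single voltage variable.

Code:
# One update step of a simple leaky integrate-and-fire unit (illustrative values).
def lif_step(v, inputs, weights, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    current = sum(w * x for w, x in zip(weights, inputs))  # weighted input
    v += (dt / tau) * (v_rest - v) + current               # leaky integration
    if v >= v_thresh:                                      # threshold crossed
        return v_rest, 1                                    # reset and emit a spike
    return v, 0                                             # no spike this step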
Back to the title of the OP though, here's something to think about. Say your mind was scanned and the data was uploaded into a fantastically large computer and then simulated. How would that computer sense or act? And would you be looking at that computer thinking that you are now uploaded into a machine or would you be aware that you are still standing there looking at it and that you still exist? You certainly wouldn't want to then kill yourself thinking that you were now in the machine.
(January 29, 2015 at 2:56 am)FallentoReason Wrote:
(January 29, 2015 at 2:29 am)Aoi Magi Wrote: No, I believe emotions are nothing "more" than process-based mechanisms, just like everything else in our brain, and thus can be mapped to a computer as well. For example: http://www.wired.com/2012/06/google-x-neural-network/
As you can see, the AI in the link above developed a likeness/fondness/attachment/attraction/whatever towards cats. Now we might be able to deduce why exactly that happened, because that system is relatively simple compared to our brains. But the simple fact that this happened is proof that our emotions can be, and in all likelihood are, process-based.
You're reporting something that isn't there.
Quote:However, Google’s latest offering appears to be the first to identify objects without hints and additional information.
They've taught a machine to do a categorical task. This isn't to do with anything "emotional" or related to developing a "likeness/fondness" etc., like you've dressed it up to be. And it's certainly nowhere near the substance of an experience.
I am not saying machines have emotions at the moment, but realize that emotions are very complex outcomes of an even more complicated process. In its simplest form, though, emotion still builds upon your own preferences: if you prefer chocolate and someone gives it to you, you feel happy; if someone takes it away from you, you feel sad, etc. The experiment and several machine learning systems have already shown that it is possible for machines to have "preferences" as well, without the need for someone to hard-code them in. This is the basis upon which emotions can be built. How the expression of that is handled is another matter altogether.
January 29, 2015 at 5:23 am (This post was last modified: January 29, 2015 at 5:25 am by bennyboy.)
Do I have to be the one wet rag that points out that wormy behavior is not necessarily the same as actual wormy consciousness?
(January 29, 2015 at 3:48 am)I_am_not_mafia Wrote: When in fact a single neuron is an amazingly complicated machine. A good simulation of a real neuron is still far more complicated and capable of far more computation than a whole artificial neural network that uses simple integrators with activation thresholds.
And that's just the neurons. Now, try modeling the fluid dynamics between synapses as neurotransmitters are released and received, as hormones arrive through the blood stream, etc.
January 29, 2015 at 6:39 am (This post was last modified: January 29, 2015 at 6:44 am by Alex K.)
(January 29, 2015 at 3:48 am)I_am_not_mafia Wrote: Don't think one byte per neuron, think one core per neuron. And then wonder how you are going to have each core connect to thousands of other cores.
If I had to guess, I'd say we will see dedicated chips with hardwired neuron units, which should be vastly more efficient than having an ordinary CPU with external memory simulate them. But I may be wrong, and this may be hopeless overkill, since biological neurons are very slow in comparison.
Worms can carry on living as two separate halves, right? Imagine if that happened to a human, so that suddenly you are two people. Not attached to each other, but dissected in such a way that each half can exist independently. This would almost certainly require artificial organs for one half.
Sure, this might be all bollocks. But the idea of consciousness splitting into two, and then both of them are you... sort of... that's scary.
January 29, 2015 at 2:39 pm (This post was last modified: January 29, 2015 at 2:43 pm by Surgenator.)
(January 29, 2015 at 3:48 am)I_am_not_mafia Wrote: Some self-promoting media scientists have suggested in the past that we'll have computers that surpass human intelligence by 2040, because they extrapolate Moore's law and assume that a single neuron is the equivalent of a single byte. [...] Don't think one byte per neuron, think one core per neuron. And then wonder how you are going to have each core connect to thousands of other cores.
I never implied that one neuron was one byte. That sounds absurd. From my biology class days (long ago), I remember that individual neurons are pretty stupid, so dedicating a whole CPU to each one is overkill.
I should also mention that the mapping calculation assumed you can't take advantage of any symmetries or locality (e.g. neurons connecting only to nearby neurons). If you can, the memory and processing cost will go down.
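For example, if connectivity were purely local, each connection target could be stored as a short offset within a neighbourhood rather than a full 64-bit global ID. The bit widths below are hypothetical, just to show how the storage scales:

Code:
# Memory for ~30 billion connection entries at different target-ID widths.
connections = 30_000_000_000
for bits in (64, 32, 16):        # global IDs vs. progressively more local offsets
    print(f"{bits}-bit targets: ~{connections * bits / 8 / 1e9:.0f} GB")
# prints roughly 240 GB, 120 GB, and 60 GB respectively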
(January 29, 2015 at 1:25 am)FallentoReason Wrote:
(January 29, 2015 at 1:04 am)Surgenator Wrote: How do you know experiences aren't process-based?
Because if we imagine mental processes as a series of pulleys, ropes and cogs, and we blew this up to a scale big enough that we could walk in and inspect it, could you then point to where the experience of me 'liking the colour blue' is happening? Can physical processes even represent the proposition 'blue is my favourite colour'?
Similarly, sensations/experiences in relation to our brain are like the speed a car can attain from its motor. Except, a car's mechanical processes can't ever feel the sensation of "going fast", whereas our "motor", the brain, *does* give us sensations related to its processes ergo we can experience. There's no way to reduce this intrinsic difference about us into a physical explanation, because it's simply not a process. It's something *more*.
(January 29, 2015 at 5:23 am)bennyboy Wrote: Do I have to be the one wet rag that points out that wormy behavior is not necessarily the same as actual wormy consciousness?
True, but nonetheless, this part is awesome...
Quote:Amazingly, without any instruction being programmed into the robot, the C. elegans brain upload controlled and moved the Lego robot.