RE: Artificial Intelligence
July 17, 2015 at 3:16 pm
(This post was last modified: July 17, 2015 at 4:27 pm by The Grand Nudger.)
(July 17, 2015 at 2:47 pm)I_am_not_mafia Wrote: We don't know that.
Yes, we do. If you -purpose built- a human being, you would leave out the vestigials, and you would scale the human being appropriately - you might rearrange a few bits while you were at it. Because we are -evolved- rather than designed, no such consideration was made. That it takes longer to evolve a brain than it does to build a better chipset is -fairly- well evidenced.
Quote:In terms of wattage the processing power of the brain is extremely efficient compared to an equivalent super computer, which will require megawatts of electricity instead of about a hundred watts.
Unfortunately, we can only side in favor of our brains by neglecting to consider how that power is provided. Not that the efficiency of the brain -itself-, in a vacuum, has escaped you lot; efficient CPUs are a definite thing, and people have long been looking to neural architecture for ideas there. Nevertheless, energy must be supplied (and in the case of our brains, a fairly robust amount of chemical inputs in addition - though we could boil that back down to energy as well, sure), and living organisms have shit conversion... just atrocious. That's why we use machines in the first place: more work over less time for a smaller amount of energy put in. Imagine how many bowls of rice it would take, for example, if the internet were a room full of Chinese people doing all of this with pen and paper (and supposing we could actually get that much work out of them... imagine how shitty an internet it would be, lol). In my own life, imagine how much time and energy it would take to let the plants do their own breeding and cultivation? No, to hell with that - we apply machine work to shore up a drag-assing biological implementation.
Perhaps, though, we simply use different measures of efficiency, etc?
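To put rough numbers on "how that power is provided" - every figure below is a ballpark assumption of mine, not a measurement, and it's only a sketch of the two competing measures:

# Back-of-the-envelope comparison in Python. All figures are rough assumptions.
brain_watts = 20.0            # the brain itself draws on the order of 20 W
supercomputer_watts = 1e7     # a petascale machine, on the order of 10 MW

# The brain's power doesn't arrive as electricity - food must be grown,
# shipped, eaten, and metabolized, and living systems convert poorly.
food_to_work_efficiency = 0.1   # assumed ~10% end to end

brain_supply_watts = brain_watts / food_to_work_efficiency
kcal_per_day = brain_supply_watts * 86400 / 4184   # upstream food energy

print(f"brain, counting upstream supply: ~{brain_supply_watts:.0f} W")
print(f"that's roughly {kcal_per_day:.0f} kcal of food per day")
print(f"supercomputer, at the wall: ~{supercomputer_watts:.0e} W")

On the device-only measure the brain wins walking away; fold in the supply chain, and the work you actually get out per joule put in, and the comparison stops being so one-sided. That's the whole bowls-of-rice problem.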
Quote:The size of our brains comes at quite an evolutionary cost in terms of difficulty of childbirth and an extremely high metabolic requirement. If we couldn't make use of our brains or didn't need them then evolution would have selected for smaller brains rather than larger ones. It's true that you can achieve a lot with very little. But if you want intelligence to scale and to be able to adapt to a wider variety of environments then you need more.
Sure, "more" in the general, but that "more" may be less than we have.
NS won't actually weed out a big brain just because it isn't fully leveraged. It would only weed out a big brain (in favor of a smaller one, for example) if that big brain was failing to deliver. Directed, human engineering does that: it reduces the scale of something along known metrics to increase its efficiency regardless of the sufficiency of its performance envelope. NS is -incapable- of doing that. So long as the big brained monkey keeps having big brained babies, NS isn't going to do anything about it just because it's oversized or an inefficient (or egregious) use of resources by some other measure... it doesn't know about -any- measure. It makes no considerations; it's improving upon nothing. We are simply what remains. There's no reason to conclude that evolution would have selected for smaller brains just because they're easier to build and we don't use our whole brain. We -don't- use our entire brain, in that sense... and evolution seems to have selected -for- our bigger brains. Clearly, the situation is a little more complicated than "big, not fully leveraged brain bad - small, fully leveraged brain good".
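If it helps, here's a toy of that - all assumptions mine, and deliberately crude - where reproduction is the only thing selection "sees" and brain size costs nothing reproductively:

# Toy drift model in Python: a trait selection can't see just wanders.
import random

pop = [random.uniform(0.5, 1.5) for _ in range(200)]   # relative brain sizes

for generation in range(100):
    # Everyone reproduces equally well, big brained or small - there is
    # no efficiency metric in play, so nothing pushes the trait down.
    pop = [random.choice(pop) + random.gauss(0, 0.01) for _ in range(200)]

print(f"mean brain size after 100 generations: {sum(pop)/len(pop):.2f}")
# Starts near 1.0 and drifts; no systematic move toward "smaller".

The moment big brains stop delivering babies, the story changes - but that's a fitness measure, not an engineering one.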
Quote:There are many different definitions of intelligence, some more useful than others. My own working definition is that it allows adaptation to an unknown environment. I believe that a formal definition is possible based on non-equilibrium thermodynamics and I am currently developing models to try and demonstrate this.
Quote:Most so called AI is trickery because it tries to simulate the effect rather than to have it arise endogenously. It's like creating a function and labelling it "anger" and writing it to produce a sudden arm movement in a robot if it senses a loud noise. That's not anger, that's a function that produces a sudden arm movement if it senses a loud noise. It's us anthropomorphising it that sees it producing anger. But it doesn't scale. What about hitting the robot with a hammer? That should produce anger as well.
Quote:My favourite paper in AI discusses this: Artificial Intelligence meets Natural Stupidity
Until you can establish that all of this doesn't -also- describe our own native system, it's difficult to see the line as anything but arbitrary. Why does hitting my brother on the arm produce anger in him, but not in me? Didn't evolution create a function which we've labeled "anger" in ourselves? What should or shouldn't produce "anger", in your example, is a simple list of conditions, and it's difficult to see why that would be hard to scale. There's no point in criticizing the anthropomorphic urges of others if you're going to follow it up, in the same breath, with an anthropomorphic assertion like "That's not anger"... you just said that it was... you mean it isn't -human- anger - but even amongst human beings "anger" is amorphous (cue the difference between what angers me and what angers my brother). So, agreed - that's not human anger. What's the problem? We're talking about creating artificial intelligence, not artificial humans, right? Is there some requirement that intelligence be human in order to avoid being a "trick"? Where does that leave all of the other examples of intelligence in our world? Is it all trickery and anthropomorphism - and again, why is our own model receiving such preferential treatment? How has it escaped the axe with which you hope to chop up the robot?
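For concreteness - the names here are mine and purely hypothetical - the "trick" in your quote boils down to something like this:

# A sketch in Python of the hand-labelled "anger" described above.
class Robot:
    def jerk_arm(self):
        print("arm jerks suddenly")

def anger(robot, stimulus):
    # The label says "anger", but the body is just the designer's list
    # of conditions: one stimulus mapped to one response.
    if stimulus == "loud_noise":
        robot.jerk_arm()

r = Robot()
anger(r, "loud_noise")    # fires: looks like "anger" if you squint
anger(r, "hammer_blow")   # silent: the label never generalized

And notice that handling the hammer is one more line in that list of conditions - which is rather my point. Whether a list of conditions deserves the label is where our disagreement actually lives, and I don't see how our own wetware escapes the same description.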
In any case, on the one hand I don't think that AI built to "model anger" is the best representative of AI - but I can see why it could be. Similarly, I don't think that the Turing test took us down the right road - it might have wasted a lot of time, even though we learned plenty about those "tricks" chasing it. I suppose I could sum the majority of my comments up like so: I'm not disputing your understanding of -how- the machines we point to as examples of potential AI achieve their particular feats. It's a hobby of mine, so I'm aware of how we model these things (and of ways we -could- model these things) down to the level of hardware - though my programming is shit. I'm wondering how you've determined that this is fundamentally different from how -you or I or a bullfrog- achieve that same effect. What do you know that could justify such a line in the sand as to call "anger" a trick when one example achieves the effect, but the real deal when the other achieves the -same- effect? It's not your comp sci I'm picking a bone with, it's your biology.