(July 13, 2015 at 2:55 am)excitedpenguin Wrote: Do you think humanity will ever produce artificial general intelligence? Do you think such an intelligence would be able to advance at an unimaginable rate, past a certain point? Would it solve all of our problems? Would it transform our world, as we know it, forever? Or would it be the end of us?
Don't hesitate to let me know what you think! Leave a reply!
In a word: no.
In several words ...
First, Moore's law won't last long enough to give us the processing power we'd need. The brain has far more connectivity than we can ever hope to achieve with our current architectures in silicon. Not only that, but we'd also need our own version of evolution to figure out how to set all the parameters. That means simulating many generations of a large population of intelligences, which requires several more orders of magnitude of processing power on top.
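To make the "several orders of magnitude" point concrete, here is a back-of-envelope sketch. All the figures below (per-brain simulation cost, population size, generation count) are invented assumptions for illustration, not estimates:

```python
import math

# Illustrative assumptions only -- none of these numbers is a real estimate.
per_brain_flops = 1e18   # assumed cost to simulate one brain-scale network once
population = 10_000      # assumed population size per generation
generations = 100_000    # assumed number of generations to tune the parameters

# Evolution evaluates every individual in every generation,
# so the totals multiply together.
total = per_brain_flops * population * generations
extra_orders = math.log10(total / per_brain_flops)

print(f"total FLOPs: {total:.1e}")          # 1.0e+27
print(f"extra orders of magnitude: {extra_orders:.0f}")  # 9
```

Whatever numbers you plug in, the multiplication is the point: an evolutionary search pays the full simulation cost once per individual per generation.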
Second, even if we had all the processing power we need, we would still need to understand how the brain works and how to create our own versions of it. That not only requires far better ways of measuring what goes on inside our heads; we also need to understand what we measure, and that will take an extremely long time.
Thirdly, there is a fundamental problem with the concept of artificial general intelligence. Imagine a living baby sealed inside a black box, kept alive until adulthood, and prevented from hearing, seeing, or feeling anything except the digital information you fed it through an interface. It would not understand that information, because it would have nothing to relate it to. Yet this is exactly what people expect from computers. Computers don't live human lives.
Fourthly, 95% of what you think of as AI intelligence is actually trickery and clever computing. Chatbots that "pass" the Turing test by pretending to be a paranoid schizophrenic, for example, don't scale: they work only in very limited ways, through trickery. There are some really fundamental questions that have blocked true, strong AI for decades, simple functions that we take for granted as intelligent beings.
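The trickery is easy to demonstrate. Below is a minimal keyword-matching chatbot sketch, loosely in the spirit of ELIZA/PARRY; the rules and replies are invented for illustration. Notice that nothing in it represents meaning, it just looks up keywords and deflects:

```python
# A toy ELIZA-style chatbot: keyword lookup plus canned deflections.
# The rules below are invented examples, not any real system's rule set.
RULES = [
    ("mother", "Tell me more about your family."),
    ("why",    "Why do you ask?"),
    ("you",    "We were discussing you, not me."),
]
FALLBACK = "I see. Please go on."

def respond(utterance: str) -> str:
    text = utterance.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply          # first matching keyword wins
    return FALLBACK               # deflect when nothing matches

print(respond("Why are you asking me that?"))  # "Why do you ask?"
print(respond("The weather is nice today."))   # "I see. Please go on."
```

A user can hold a short conversation with something like this and come away impressed, but there is no understanding anywhere in the loop, which is why the approach doesn't scale beyond its rule set.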
What we will probably see is animal-like intelligence in situated robots, with robotic emotions, self-awareness and consciousness. But that's not the artificial general intelligence people are expecting, the kind you see in Star Trek with the ship's computer. Such computers may be able to respond intelligently to voice commands, but they won't actually understand what you are saying. At most, the big-data approach amounts to statistical analysis.
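In that spirit, "responding to voice commands" can be sketched as pure statistics: pick the stored command whose word overlap with the input is highest. The command set and scoring below are invented for illustration:

```python
# Toy "voice command" matcher: bag-of-words overlap, no meaning anywhere.
# The command vocabulary is an invented example.
COMMANDS = {
    "lights on":  {"turn", "on", "lights"},
    "lights off": {"turn", "off", "lights"},
    "play music": {"play", "some", "music"},
}

def match(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Highest word overlap wins -- a statistical choice, not comprehension.
    return max(COMMANDS, key=lambda c: len(COMMANDS[c] & words))

print(match("please turn the lights on"))  # "lights on"
print(match("play some music for me"))     # "play music"
```

Real systems use far richer statistics than word overlap, but the character of the computation is the same: a best-scoring match over data, not an understanding of the request.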
(I'm a professional researcher in AI)