(April 6, 2018 at 8:05 am)ignoramus Wrote: 1) Do we actually have any AI's that are more than superfast calculators?
No, although whatever form strong AI takes, you could always refer to it as 'nothing more than' something. The traditional curse of AI is that whenever you get something to work, it stops being called AI and becomes a field in its own right.
The problem is that people do not make the distinction between strong AI and weak AI. Almost everything that we see as AI is actually weak AI, which means it has very narrow applicability. People and animals, on the other hand, have brains that are very good at generalising. The true challenge of AI is to have it both able to scale up and able to cope with many different problems without knowing a priori what kind of problem it needs to tackle. If you can do this, then you are some way towards making strong AI. Unfortunately, there is extremely little progress in this regard. It's a vicious cycle: because there is so little progress, there is little interest in funding such research, since you are unlikely to get anything useful back. But at least with the rise of AGI (Artificial General Intelligence), which is basically what AI meant before the term became cheapened, we are starting to research it again.
(April 6, 2018 at 8:05 am)ignoramus Wrote: 2) What is the true end goal of an AI? To try to solve world problems beyond our intelligence levels? Or just to pass the turing test?
To make computers and robots more useful. Computers are very fast idiots and need every instruction spelled out for them. Making them more intelligent lets them do more for us with less effort on our part.
(April 6, 2018 at 8:05 am)ignoramus Wrote: 3) Do you personally believe that an AI will one day be given citizenship? And why the preoccupation with anthropomorphising? Do we just like playing pretend God?
Not in my lifetime. Not unless there are major advances in technology that we can't even imagine yet. The human brain runs on roughly 20 watts of energy; even if we could replicate it in silicon, we'd need gigawatts of electricity. Yes, we could get away with less, but then we'd lose out on scalability. On top of that, strong AI needs to be embodied; otherwise it's just another form of statistical or symbolic data processing.
Natural intelligence is part of a loop formed by it and its environment. It senses its environment, acts, thereby changing the environment, and then starts the loop again by sensing the new environment. What this means is that if your AI needs another bit of code to observe it and change it accordingly, then it cannot be strong AI. In the same way, we don't need someone cutting our heads open, scanning our neurons and sticking electrodes into our brains in order for us to think. But that's effectively what happens with narrow AI, and very few people even have a clue as to how to go about creating AI that works like this.
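To make that loop concrete, here's a minimal toy sketch in Python. Every name in it is made up for illustration; it's just the bare shape of a sense-act loop, not anyone's actual architecture:

class Environment:
    """A toy world whose entire state is a single number."""
    def __init__(self):
        self.state = 0

    def sense(self):
        # What the agent perceives: the current state of the world.
        return self.state

    def apply(self, action):
        # The agent's action changes the world itself.
        self.state += action

class Agent:
    def act(self, observation):
        # A trivial stand-in policy: nudge the state towards 10.
        return 1 if observation < 10 else -1

env = Environment()
agent = Agent()
for _ in range(20):
    obs = env.sense()        # 1. sense the environment
    action = agent.act(obs)  # 2. decide what to do
    env.apply(action)        # 3. act, changing the environment,
                             #    then loop back and sense it anew

The point is that nothing outside the loop reaches in to rewrite the agent; any adaptation would have to happen through sensing and acting. That's exactly what narrow AI pipelines don't do, since there an engineer retrains and redeploys the model from outside.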
Add to that Moravec's paradox: low-level sensorimotor skills require far more computation than the kinds of problems we consider the height of intelligence, such as playing chess.
If you want to see the shape of the strong AI of the future, look at Boston Dynamics. We're going to have to start by recreating ever more complex artificial animals (otherwise known as animats). But for this to happen there needs to be an economic advantage to doing so. Big data has taken off because money can be made from it. If asteroid mining became viable, for example, we could see strong AI develop rapidly.
(April 6, 2018 at 8:05 am)ignoramus Wrote: I think computers will become more powerful, but in a scientific sense.
Thinking things like Skynet to me will always be science fiction, like all the alien invasion movies.
Agreed. Skynet would never be able to understand the world if it weren't embodied in it.