(November 4, 2016 at 5:16 pm)abaris Wrote:
(November 4, 2016 at 5:12 pm)Mathilda Wrote: Yeah but how will AI be a threat?
Bears don't just see us as ants; they see us as food. I don't see much of a bear apocalypse going on right now.
I'm quoting the quote to add another question. How likely is it, in this fictional scenario, that an AI created by humans surpasses human intellect? How would that work, beyond the processing power, energy and self-repair routines it would need? Since the AI is a threat in this scenario, it can't expect to be serviced by humans.
Ah good. A proper question. Thank you.
What is intellect? Googling I get "the faculty of reasoning and understanding objectively, especially with regard to abstract matters."
We can already do this now with mathematics: there are computer-generated proofs, Wikipedia-sized, too big for humans to check. But such a proof serves one specific purpose. It's not generalisable; it's mere computational search. There's no adaptability, no learning, no memory, nor any of the myriad other functions the brain performs. And now mathematicians are asking whether such a proof is actually a useful tool, because we humans cannot understand it. Our brains are the limiting factor. Tools need to be useful; if they aren't, they won't be used.
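To make the "mere computational search" point concrete, here's a toy sketch in Python (my own illustration, not from the thread): it "proves" Goldbach's conjecture for small even numbers by exhaustively finding a witness for every case. The result is a pile of certificates, not understanding — the search confirms each case without explaining why any of them holds.

```python
# Toy "proof" by exhaustive computational search.
# Verifies Goldbach's conjecture (every even n > 2 is a sum of two
# primes) up to a small bound. The output is a list of witnesses,
# one per case -- a certificate, not insight.

def is_prime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# The "proof" for all even n in [4, 1000] is just this dictionary
# of per-case witnesses; nothing in it generalises to larger n.
witnesses = {n: goldbach_witness(n) for n in range(4, 1001, 2)}
assert all(w is not None for w in witnesses.values())
print(witnesses[100])  # prints (3, 97)
```

Scale this up far enough and you get proofs like the 200-terabyte kind: every case checked, yet no human can read the result, which is exactly the "useful tool?" question above.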
Intellect is a very small aspect of intelligence. Intelligence also includes the ability to control your body, to interpret sensory data, to store and recall memories, and to generalise over learned experiences. But for this you need to be embodied somehow. That can be in a virtual environment like a computer game, for example, but as with GIGO (Garbage In, Garbage Out), you can only process what you have been given.
So let's take the idea of letting an AI loose into the wilds of the internet, with all that data to play with. How does it reason about any of it if it doesn't lead a human life? More importantly, why would it? What drives it? Perhaps a goal we have given it, in which case it's a tool that's either useful or it isn't. Maybe we want to give it the ability to control some resource, such as buying and selling shares. We do that now, and yes, there are concerns if everyone does it, but again, it's a tool and it has to be useful. And ultimately, as with every human, even a tool is accountable to someone else.
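The "it's a tool, and a tool is accountable" idea can be sketched in a few lines of Python. This is purely illustrative — the class, the limits and the numbers are all made up — but it shows the shape of the argument: the agent optimises a goal its operators gave it, inside hard limits its operators also gave it and that it cannot unilaterally exceed.

```python
# Toy sketch of a goal-driven trading "agent" that remains a tool:
# its goal AND its limits come from its human operators. All names
# and numbers here are hypothetical, for illustration only.

class AccountabilityError(Exception):
    """Raised when the agent tries to exceed an operator-set limit."""
    pass

class TradingAgent:
    def __init__(self, budget, max_position):
        self.budget = budget              # hard spending cap set by the owner
        self.max_position = max_position  # hard holdings cap set by the owner
        self.position = 0

    def buy(self, price, qty):
        cost = price * qty
        # The agent is checked against limits it did not choose:
        if cost > self.budget:
            raise AccountabilityError("budget exceeded")
        if self.position + qty > self.max_position:
            raise AccountabilityError("position limit exceeded")
        self.budget -= cost
        self.position += qty

agent = TradingAgent(budget=1000.0, max_position=50)
agent.buy(price=10.0, qty=20)       # fine: within both limits
try:
    agent.buy(price=10.0, qty=100)  # blocked: would exceed the budget
except AccountabilityError as e:
    print("blocked:", e)
```

However clever the buying logic inside gets, the enforcement sits outside it, which is the sense in which the tool stays accountable to someone else.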
Let's take Data from Star Trek: an embodied artificial intelligence with superhuman intelligence. Why should we expect an AI to be any different from any other animal or human? It has the same problems that we all do. It has a limited body with competing needs, and it's probably going to deal with them in a similar way: with emotions, drives, instincts, etc. Some humans are more intelligent than others, but we're still accountable; we still have to obey laws.
If people think we should be afraid of large, super-intelligent organisms with the capacity to change the world, then why not be afraid of corporations? They can be made up of hundreds of thousands of brains, all specialising in particular areas. They are a far more immediate threat than AI, yet we can still control them. They are still accountable.
Intellect, no matter how advanced, is limited by its environment.