(November 4, 2016 at 11:50 am)Rhythm Wrote: What laymans assumptions would those be, for the laymen among us?
This isn't the only time Sam Harris has talked about the dangers of AI; I have been talking about Sam Harris in general, not just his TED Talk. I assumed he had made this point in the TED Talk as well, but he didn't mention it there.
Can We Avoid a Digital Apocalypse?
Quote:With AGI the most powerful methods (such as recursive self-improvement) are precisely those that entail the most risk.
He fundamentally misunderstands that you cannot have recursive self-improvement without some part of the program staying constant, and therefore the process is inherently limited. Otherwise it's like poking yourself in the stomach and claiming the finger is poking itself.
If the program is able to write over itself entirely, then it cannot reliably improve: the first time it produces a less optimal or broken version of itself, it loses the ability to continue improving. It can't know in advance that a solution will fail badly until it tries it, so the only way to allow unrestricted self-rewriting is to accept that many of your artificial intelligences will become useless the moment you deploy them. Basically, what he is describing is evolution.
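To make the point concrete, here is a minimal sketch in Python of a self-improvement loop (the toy objective and names like propose_mutation are my own illustrative assumptions, not anything from Harris). Notice which parts must stay constant:

Code:
import random

def evaluate(program):
    # Fixed fitness test. This is the part that must stay constant:
    # rewrite it too and "improvement" no longer means anything.
    return -abs(program["x"] - 42.0)  # toy objective: drive x toward 42

def propose_mutation(program):
    # Hypothetical self-modification step: returns a changed copy.
    # Nothing guarantees the change is actually better.
    candidate = dict(program)
    candidate["x"] += random.uniform(-5.0, 5.0)
    return candidate

program = {"x": 0.0}
for _ in range(1000):
    candidate = propose_mutation(program)
    # Keep the old version and adopt the candidate only if the fixed
    # evaluator scores it higher. Remove this safeguard (let the program
    # overwrite itself unconditionally) and the first bad mutation ends
    # all further improvement.
    if evaluate(candidate) > evaluate(program):
        program = candidate

print(program["x"])  # ends up near 42

The "constant part" here is evaluate plus the accept/reject rule; everything else is free to change. That fixed scaffolding is what makes the loop directed rather than blind, and it is also what bounds how far "recursive self-improvement" can go.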
We already have artificial evolution. There is nothing magical about it; it's just a form of parameter search. But it is an absolutely essential step, and the processing time it requires grows as the system gets more complex. For example, I can run a simple three-layer, biologically plausible neural network on my computer. It runs really fast and can do strong AI. But to get it I have to evolve a whole population over many generations, a stage that takes many weeks of running 24/7. And that's assuming the result is the final version.
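For anyone who hasn't seen one, here is a minimal sketch of evolution-as-parameter-search in Python: a population of weight vectors for a tiny 2-4-1 network is selected and mutated until it solves XOR. The population size, mutation rate, and network shape are arbitrary illustrative choices, not the setup I described above:

Code:
import numpy as np

rng = np.random.default_rng(0)

# XOR task: the fitness function the population is selected against.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(genome, x):
    # Tiny 2-4-1 network; the genome is a flat vector of its 17 weights.
    w1 = genome[:8].reshape(2, 4)
    b1 = genome[8:12]
    w2 = genome[12:16]
    b2 = genome[16]
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

def fitness(genome):
    # Negative squared error over the four XOR cases.
    preds = np.array([forward(genome, x) for x in X])
    return -np.sum((preds - Y) ** 2)

GENOME_LEN = 17
pop = rng.normal(0, 1, size=(50, GENOME_LEN))  # random initial population

for gen in range(500):
    scores = np.array([fitness(g) for g in pop])
    # Selection: keep the 10 best genomes as parents.
    parents = pop[np.argsort(scores)[-10:]]
    # Reproduction with mutation: children are noisy copies of parents.
    children = parents[rng.integers(0, 10, size=40)] \
        + rng.normal(0, 0.3, size=(40, GENOME_LEN))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
# Typically approaches [0, 1, 1, 0]:
print([round(float(forward(best, x)), 2) for x in X])

Even this toy version needs tens of thousands of fitness evaluations for a 17-parameter network, which is exactly the scaling problem I'm describing: the evolutionary stage is where the real processing time goes, and it only gets worse as the system grows.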