(November 4, 2016 at 1:27 pm)Rhythm Wrote: 2. We will continue to improve our intelligent machines.
3. We are not near the summit of possible intelligence.
Thanks, I was trying to remember 2 and 3 but didn't want to watch that video again.
2. We might not be able to improve our intelligent machines indefinitely. The assumption he is making is that progress will speed up exponentially, and he cannot make that assumption. We are limited by how long it takes us to measure, to understand, to experiment, to publish, etc. There is so much that needs to be done to create the kind of AI he is talking about that we cannot make assumptions about how society will look in a couple of hundred years' time. Maybe we're in a golden age right now and what follows will be a steady decline, because we fail to make the transition to a cheaper and more abundant form of energy before the cheap oil runs out. Maybe we'll all be living in a theocracy or a fascist dictatorship. Maybe corporate capitalism will fail; there is plenty of reason to recognise it as unsustainable in its current form. Maybe resource wars or a pandemic will destroy our increasingly fragile just-in-time society. He's assuming that the society and economy we have now will still be with us in the future, and even if our economic system continues smoothly for the next few hundred years, we still can't predict what the needs of that society will be.
3. The space of possible intelligence is limited by your environment. The AI we see now is statistical trickery. Google Translate does not actually understand what you are saying, and there is absolutely no way that it can, because it doesn't actually use the language in a way that relates it back to itself. Understanding comes from being embodied in an environment.
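To illustrate what I mean by statistical trickery, here's a minimal toy sketch in Python. This is not how Google Translate actually works (real systems are statistical or neural models trained on huge corpora), but the point carries over: the "translation" is just mapping one string to another, with nothing anywhere that relates the words to the world.

```python
# Toy "translator": swaps surface strings via a lookup table.
# Purely illustrative -- nothing here represents what any phrase means.

phrase_table = {
    "good morning": "guten Morgen",
    "thank you": "danke",
    "where is the station": "wo ist der Bahnhof",
}

def translate(sentence: str) -> str:
    """Return the target-language string for a known source phrase."""
    key = sentence.strip().lower().rstrip("?.!")
    # The output is just a string lookup; there is no representation
    # of mornings, gratitude, or stations anywhere in the program.
    return phrase_table.get(key, "<no match in table>")

print(translate("Where is the station?"))  # -> wo ist der Bahnhof
```

The program gets the "right answer" without anything you could call understanding, which is exactly the distinction I'm drawing.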