(November 4, 2016 at 7:19 pm)Mathilda Wrote: As mentioned before, we are the limits. Our ability to understand and to engineer. This is what people don't appreciate, just how fiendishly difficult AI actually is. It's getting late so I will save it for another post but things that we take for granted as intelligent beings, we have no idea how to even go about implementing in an AI.
Mathilda's point here is really worth considering. We're nowhere near "true AI". We can just about program cars to stay in the correct lane, and they are still confused by white vans. The "AI" in that case is not intelligence in the sense that humans have it. The computer doesn't know what a car is; it has no concept of driving, or of staying between two lines. It has simply been programmed to monitor sensors and perform certain actions when those sensors produce certain outputs.
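To make that concrete, here's a minimal sketch of what "lane keeping" amounts to at that level: a bare mapping from sensor readings to actions, with no concept of cars, lanes, or driving anywhere in it. All the names and thresholds are invented for illustration.

```python
# Hypothetical sketch: lane keeping as a plain sensor-to-action mapping.
# The function "knows" nothing; it just compares numbers to thresholds.

def lane_keep_step(left_offset_m, right_offset_m):
    """Map two distance readings (metres to each lane line) to a command."""
    if left_offset_m < 0.5:    # drifting toward the left line
        return "steer_right"
    if right_offset_m < 0.5:   # drifting toward the right line
        return "steer_left"
    return "hold"

lane_keep_step(0.3, 1.2)  # → "steer_right"
```

Feed it a white van instead of a lane line and it has no way to notice; it only ever sees the numbers it was told to compare.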
To get to a state where computers would be "more intelligent" than humans in a human sense, we'd first have to figure out how to do that at all, and even then it wouldn't be the same thing. At a basic level, any computer AI will operate via code on which we can impose limits and constraints, and on top of that we can impose physical (hardware) limitations.
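A software-level constraint of the kind described above can be as blunt as a whitelist sitting between the program and the hardware. This is a toy sketch with invented names, not a real safety mechanism:

```python
# Hypothetical sketch: a hard-coded software constraint on an "AI" controller.
# Any action outside the fixed whitelist is rejected before it can take effect.

ALLOWED_ACTIONS = {"steer_left", "steer_right", "brake", "hold"}

def constrained_dispatch(requested_action):
    """Refuse any action that is not on the hard-coded whitelist."""
    if requested_action not in ALLOWED_ACTIONS:
        raise PermissionError(f"blocked action: {requested_action}")
    return requested_action

constrained_dispatch("brake")              # permitted
# constrained_dispatch("disable_limits")   # would raise PermissionError
```

The program can be as clever as it likes inside the loop; the dispatch layer simply never forwards anything off the list.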
As for self-improvement, that's a whole other ball game. We can't currently get computers to write advanced programs on their own. So the idea that, once we crack "true AI", we'll simply teach this new computer how it works and have it update its own code? I find that rather far-fetched, to say the least.
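For what it's worth, the closest thing we have to a "program that writes programs" today is string templating: the generator blindly fills in a pattern with no understanding of what the output does. A toy example (everything here is invented):

```python
# Hypothetical sketch: a "program-writing program" that is really just
# string templating. It understands nothing about the code it emits.

def write_adder(n):
    """Generate Python source for a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = write_adder(5)
namespace = {}
exec(source, namespace)    # run the generated source
namespace["add_5"](10)     # → 15
```

That gap, between filling in a template and genuinely redesigning your own architecture, is exactly the gap the "recursive self-improvement" story hand-waves over.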