(November 4, 2016 at 5:36 pm) Alasdair Ham Wrote:
(November 4, 2016 at 5:31 pm) abaris Wrote: How can a thing, designed by human intellect, assembled by human ingenuity, possibly surpass human intellect and ingenuity?
How couldn't it?
Isn't that rather like asking "how could intelligent humans evolve from single-celled organisms?"
Seems like the genetic fallacy to me. Why would intelligent AI in the future be limited by the intelligence of those who created it?
Because it takes time. Lots of time. It can't know in advance whether a solution to whatever it's trying to do is good or bad without actually testing it, and that means applying it to the real world. And as I said before, any recursively self-improving function that overwrites part of itself is limited by the part that doesn't get overwritten.
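To make that concrete, here's a toy sketch of the point (everything in it is illustrative, not taken from any real AI system): a "self-improving" program can only rewrite one part of itself, and it can only find improvements by testing each candidate against an external objective, one trial at a time. The fixed loop that does the testing and selecting is the part it never gets to overwrite.

```python
import random

def objective(x):
    """Stands in for 'the real world': the program can only learn whether a
    change is good by actually trying it and measuring the outcome."""
    return -(x - 3.0) ** 2  # best value at x = 3, unknown to the optimizer

def self_improve(steps=1000):
    x = 0.0                  # the one part of itself it is allowed to rewrite
    best = objective(x)
    for _ in range(steps):   # this loop is the fixed core it cannot overwrite
        candidate = x + random.uniform(-0.5, 0.5)
        score = objective(candidate)  # no shortcut: each guess must be tested
        if score > best:              # keep only improvements found by trial
            x, best = candidate, score
    return x

print(self_improve())  # creeps toward 3.0, but only one tested step at a time
```

The sketch is deliberately simple, but it shows why the improvement can't be instant: every proposed change costs a real evaluation, and the evaluating machinery itself stays as-built.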
(November 4, 2016 at 5:36 pm) Alasdair Ham Wrote:
(November 4, 2016 at 5:31 pm) abaris Wrote: It could process faster, given the right amount of energy and processing power. Computers do that for us already, but it can't actually surpass the intellect available when it's assembled.
Why not?
We're talking about future intelligences where the AIs can think for themselves like humans... only much more intelligent.
Which is far, far into the realms of sci-fi. So what form does this AI take? Is it an android? A disembodied AI on a server farm? What? What dangers could it pose?
(November 4, 2016 at 5:36 pm) Alasdair Ham Wrote:
(November 4, 2016 at 5:31 pm) abaris Wrote: How likely is all of this coming together anytime?
Like anytime soon? Like he said... all it takes is for us to keep going; it doesn't have to be anytime soon.
Let's assume for the sake of the argument that it's definitely going to happen and can't be stopped: how quickly is it going to happen? Surely that's what matters here. If it happens slowly, then we have time to adapt and evaluate its progress.