OK, so he is assuming that processing speed will increase exponentially. I know he mentions Moore's Law coming to an end as an objection, but he never actually addresses it. He just treats faster processing as if it were the same thing as better intelligence.
As a neuroscientist, he really, really should know better. He knows how many neurons there are, how many synapses, how they are connected, how much computation goes on within a single dendritic tree. All of this is computation, and the brain is massively parallel. There are many problems we cannot ever hope to compute within the lifetime of the universe on processor speed alone, and parallel processing is hard for conventional computing platforms, even with just a handful of processors or threads.
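Just to put some rough numbers on that (these are ballpark figures I'm assuming for illustration, not measurements), here is a quick back-of-the-envelope sketch of what a serial processor is up against:

```python
# Back-of-the-envelope sketch with assumed ballpark figures (not measurements):
# roughly how many synaptic events per second does a brain generate, and how does
# a single fast core compare if it has to process them one at a time?

NEURONS = 86e9               # approx. neurons in a human brain (assumed)
SYNAPSES_PER_NEURON = 1e4    # rough average synapse count per neuron (assumed)
MEAN_RATE_HZ = 10            # assumed average firing rate

synaptic_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * MEAN_RATE_HZ

CPU_OPS_PER_SEC = 1e10       # generous: one event handled per operation on a very fast core

slowdown = synaptic_events_per_sec / CPU_OPS_PER_SEC
print(f"Synaptic events per second: {synaptic_events_per_sec:.2e}")
print(f"A single serial core falls behind by a factor of ~{slowdown:,.0f},")
print("and that's before doing any actual work per event.")
```

However fast the clock gets, the brain's advantage is parallelism, not raw speed.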
He is assuming that processing speed will keep increasing forever. Why make that assumption? Why extrapolate the curve? As I said, exponential functions in nature saturate into sigmoid functions, and mistaking one for the other is a classic error. It's why you get asset bubbles and people who think house prices will go up forever. He completely ignores the economy and the resources it needs to keep expanding. Computers need rare earth minerals. They need energy to run. They need to be cooled, which takes more energy. They need space.
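To make the point concrete, here is a toy comparison (purely illustrative numbers, not a model of anything real): an exponential and a logistic curve with the same early growth rate look almost identical right up until the logistic saturates, and that early stretch is exactly where people draw their extrapolations from.

```python
import math

# Purely illustrative: exponential growth vs. logistic (sigmoid) growth with the
# same early growth rate. They track each other at first, then the logistic
# flattens out at its carrying capacity while the exponential keeps exploding.

def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    # logistic growth starting at 1 with carrying capacity K
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 21, 2):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
# At t=2 the two curves are within a few percent of each other; by t=20 the
# exponential is around 22,000 while the logistic has flattened out just under 100.
```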
He is also making assumptions about the economy and society. Funding for research is not automatic, because funding is limited. Every discovery opens up a new area of the search space, and that is where the funding goes. The economy does not fund other areas that might be just as lucrative, because people don't even realise or appreciate what could be done there. For example, Big Data is all the rage right now, not embodied robotics, and that's only because we happen to be drowning in data from the Internet.
That's the thing about the progress of technology: even a few years out it can be next to impossible to predict, because of the myriad ways that both the economy and society change.
The AI solutions we have now, built on a data-intensive economy, are nowhere near adequate for creating the kind of AI he is talking about. A strong AI needs to be embodied for it to understand anything, otherwise you run into Searle's Chinese Room problem. Or the example I like to use: imagine putting a baby into a sensory deprivation chamber, sticking tubes into it, and letting it grow until it's 20 years old. The wetware is all there and functioning, but it could not understand anything about the outside world because it never lived in it. You can only be as intelligent as your environment ever allowed you to be. And with an embodied agent you then hit other limits, such as materials research and how you power the thing.
Then there's the way Harris just says: imagine replacing a room full of 20 Yale graduates with a super Artificial Intelligence, and it will just keep developing, and so on. How? How will that AI work? We don't know. So how can we assume it will exponentially increase in intelligence? Maybe it will be a service bot that gets its joy from vacuuming the carpet.
The less said about America, China and Russia deploying Artificial Intelligences that could start world wars, the better.
He talks about the AI being an extension of ourselves, about plugging it into our brains, and about its values. He is making assumptions about the form the AI will take. Is it going to be like an embodied animal with drives and needs? Is it going to sit in a data farm trawling through the Internet without actually understanding any of it? He just glosses over what he means by the AI's values. The fact is we can't know, because we don't yet know what kind of AI is even possible.
The limiting factor in all of this is our ability, as humans, to understand. That will never be as fast as he imagines. We can't just conjure up Artificial Intelligence; we are limited by our ability to measure the brain and to process that data. Just compare it to the effort that has gone into understanding the genome, and how many papers get written about a handful of genes without really saying much.
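To give a feel for the scale of the measurement problem (again, rough figures I'm assuming for illustration): electron-microscopy reconstructions of about a cubic millimetre of cortex already run to roughly a petabyte of raw data, and a human brain is on the order of a million cubic millimetres.

```python
# Rough, assumed ballpark figures for the scale of "just measure the brain":
# imaging ~1 cubic millimetre of cortex at synaptic resolution yields on the
# order of a petabyte of raw data, and a human brain is roughly a million mm^3.

PB_PER_MM3 = 1.0             # assumed ~1 petabyte of raw imaging data per mm^3
BRAIN_VOLUME_MM3 = 1.2e6     # approximate human brain volume in mm^3 (assumed)

total_pb = PB_PER_MM3 * BRAIN_VOLUME_MM3
print(f"~{total_pb:,.0f} petabytes (~{total_pb / 1e6:.1f} zettabytes) of raw data")
print("just to store one static snapshot, before segmenting a single neuron,")
print("let alone understanding what any of it does.")
```

Even if those numbers are off by an order of magnitude either way, the conclusion doesn't change.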
Believe me, there are a whole load of problems in AI that we don't even know how to begin solving.
I really want to go into this in more detail, but there is so much wrong with what Harris has said that it's just too long for a single post.