RE: Machine Intelligence and Human Ethics
May 27, 2019 at 5:16 am
(This post was last modified: May 27, 2019 at 5:34 am by I_am_not_mafia.)
Don't worry about it. The current fad with machine learning is based on sorting through large amounts of static data harvested from our computer infrastructure in society. There is virtually no funding for the kind of AI you need for autonomous embodied robots that act like living agents. We need an entirely different approach for the latter.
Add to that, there has also been very little progress in the other technologies needed for sentient, sapient, conscious embodied AI, such as the actual robotics and sensors.
We first need to know what the technology will look like before questions of morality become relevant. Only then can we start to answer them.
Don't get taken in by all the hype regarding Artificial Intelligence. 99% of it is bullshit used to get funding and to promote individuals' careers as 'visionaries' and 'futurists' without having to deliver anything.
The AI you see now is weak AI; what you are referring to is strong AI, and there has been virtually no progress on that. We do now have a new name for it, AGI (Artificial General Intelligence), but that's about it.
What I can tell you, though, is that the kind of AI we would need would have to solve the same problems as animals: maintaining homoeostasis and being an autonomous agent embodied in an environment as part of a sensorimotor loop. There has been virtually no progress in this research since the 1990s, except for the work done by Boston Dynamics.

The action controller will require engineering self-organising systems, which is what I research myself (peer-reviewed publications in scientific journals and conferences). But this is an extremely difficult area that very few people have any idea how to even start developing. People haven't even started asking the right questions yet. I don't have much idea myself, and I've been actively researching this area for over 20 years. But at least I know the right questions now.
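To make the homoeostasis / sensorimotor-loop point concrete, here is a deliberately minimal toy sketch (my own illustration, not anyone's actual research code): an agent in a 1-D world that senses only the direction of food, acts by moving, and must keep an internal energy variable above zero. Every name here (Environment, run, the energy thresholds) is made up for the example.

```python
import random

class Environment:
    """Toy 1-D world: a single food item at a random cell along a line."""
    def __init__(self, size=10, seed=0):
        self.rng = random.Random(seed)
        self.size = size
        self.food = self.rng.randrange(size)

    def sense(self, pos):
        # The agent senses only the direction of the food: -1, 0, or +1.
        if self.food == pos:
            return 0
        return 1 if self.food > pos else -1

    def act(self, pos, move):
        # Apply the action, clipped to the world's bounds.
        pos = max(0, min(self.size - 1, pos + move))
        ate = (pos == self.food)
        if ate:
            self.food = self.rng.randrange(self.size)  # new food appears
        return pos, ate

def run(steps=200):
    """Sense -> act -> sense ... loop with a single homeostatic drive."""
    env = Environment()
    pos, energy = 0, 50.0
    for _ in range(steps):
        direction = env.sense(pos)
        # The 'drive' is just: forage when energy is low, rest when sated.
        move = direction if energy < 80 else 0
        pos, ate = env.act(pos, move)
        energy += 10 if ate else -1   # eating restores energy; time costs it
        energy = min(energy, 100.0)
        if energy <= 0:
            return False, energy      # homeostasis failed: the agent 'dies'
    return True, energy
```

The point of the toy is what it lacks: real animals face high-dimensional sensors, many competing drives, and a body whose dynamics are part of the controller, and the mapping from sensing to action has to self-organise rather than be hand-coded as a threshold rule like the one above.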