RE: Should driverless cars kill their own passengers to save a pedestrian?
November 17, 2015 at 9:40 pm
(This post was last modified: November 17, 2015 at 10:00 pm by Aroura.)
(November 17, 2015 at 8:07 pm)IATIA Wrote:
(November 17, 2015 at 12:31 am)Aroura Wrote: ... I agree that this is not some moral issue, like the train track dilemma. But this one fails to give me much of a dilemma. The car will do what it is programmed to do, which will be to try and avoid killing anyone.
As you say, "The car will do what it is programmed to do", but that is where the moral issue resides. There is a programmer, or team of programmers, who must consider the outcomes of the decision-making process that will ultimately control the vehicle. I do not know if there are any real programmers on board, but programming must encompass worst-case scenarios. In computer software and games, the general rule of thumb is that any input which fails to align with the intended programming gets shoved in the bit bucket and the software resumes polling inputs. In cars and planes, however, it would be disastrous to ignore unforeseen inputs, so worst-case scenarios must be considered. They can usually be grouped into similar algorithms, but that is where the problem comes in.
Simple algorithm: something in the way, dodge it. OH NO! Cliff, too late. It does not matter how fast the computer is, because it must abide by physics and causality, i.e. the reason for stopping and the stopping distance. Even if it can calculate, for the sake of argument, the exact stopping distance, it is still bound by the laws of physics. And that does not account for tire wear, that patch of oil (hit one on a bike once, no fun), or any other unknowns.
Because the car cannot think and can only do what is programmed, the programmer ultimately has to decide whether to program for the safety of the occupants, the safety of the greater number, or whatever else. That is where the morality lies, and ultimately the obligatory lawsuits. It is obvious from some of the posts that some posters did not really read the article linked at the beginning. So yes, from the programmer's standpoint, this is exactly the trolley problem.
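To put rough numbers on the stopping-distance point (the speeds, reaction times, and friction values here are figures I'm assuming for illustration, not anything from the article):

```python
# Back-of-the-envelope stopping distance: reaction distance plus braking distance.
# total = v * t_react + v^2 / (2 * mu * g), where mu is the tire/road friction coefficient.
G = 9.81  # gravity, m/s^2

def stopping_distance(speed_ms, reaction_time_s, friction_coeff):
    """Meters traveled from 'obstacle detected' to a full stop."""
    reaction_distance = speed_ms * reaction_time_s
    braking_distance = speed_ms ** 2 / (2 * friction_coeff * G)
    return reaction_distance + braking_distance

v = 50 / 3.6  # 50 km/h in m/s
print(stopping_distance(v, 1.5, 0.7))   # human reaction, dry road: ~34.9 m
print(stopping_distance(v, 0.1, 0.7))   # computer reaction, dry road: ~15.4 m
print(stopping_distance(v, 0.1, 0.25))  # computer reaction, oily road: ~40.7 m
```

The computer shaves off the reaction distance, but the braking distance is pure physics; no amount of processing speed changes it, and a patch of oil nearly triples it.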
Ok, my husband is a programmer working to become a computer engineer (well, to be completely honest, he is in his senior year of computer science, and he is currently working on his senior project, which will be used for this kind of decision system in robotics: ROS.org; he's also been programming computers since the '80s), and he has some things he'd like me to relay in this thread. First, he just finished taking an ethics in technology and engineering class, so this kind of stuff is all fresh in his mind. For the following, please take into account that I'm trying to say what he says, but in words I understand better. His thoughts on the subject are that no engineer is going to program the car to swerve, because swerving opens up a whole host of unpredictable outcomes. Swerving might aim the car at another car, or another pedestrian, etc.
It will have as simple a set of programs as possible. Much like EXISTING technology that AUTOMATICALLY stops new cars when they sense something in front of them, so you as a human don't have to, a Google car will most likely just do its best to stop. Even if it does not come to a full stop before colliding with the pedestrian, it will do a better job than a human being of sensing the obstacle and braking in time, so the impact speed will be far lower than it would be with a human driver forced to react to the same situation.
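To sketch what he means by "just do its best to stop" (this is our toy illustration of a brake-only policy, not Google's actual code; the safety margin is a made-up number):

```python
G = 9.81  # gravity, m/s^2
SAFETY_MARGIN = 1.2  # made-up cushion over the computed braking distance

def braking_distance(speed_ms, friction_coeff):
    """Distance needed to brake from the current speed to a stop, in meters."""
    return speed_ms ** 2 / (2 * friction_coeff * G)

def decide(obstacle_distance_m, speed_ms, friction_coeff):
    """Brake-only policy: there is no swerve branch, so no 'who do we hit' choice."""
    if obstacle_distance_m <= braking_distance(speed_ms, friction_coeff) * SAFETY_MARGIN:
        return "FULL_BRAKE"
    return "CONTINUE"
```

Notice there's no swerve option in the logic at all: braking is the one action whose outcome the programmer can actually predict.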
In short, it would hit the pedestrian if it truly did not have time to come to a full stop after sensing them, but it would react so much more quickly than a person that the chance of injury or death would be greatly reduced compared with the same situation under a human driver, for both the pedestrian AND the driver.
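And here's why the faster reaction matters so much even when the car can't fully stop (again, my rough numbers, just for illustration):

```python
import math

G = 9.81  # gravity, m/s^2

def impact_speed(speed_ms, pedestrian_distance_m, reaction_time_s, friction_coeff):
    """Speed (m/s) at which the car reaches the pedestrian; 0 means it stopped in time."""
    braking_room = pedestrian_distance_m - speed_ms * reaction_time_s
    if braking_room <= 0:
        return speed_ms  # pedestrian reached before the brakes even engage
    v_squared = speed_ms ** 2 - 2 * friction_coeff * G * braking_room
    return math.sqrt(v_squared) if v_squared > 0 else 0.0

v = 50 / 3.6  # 50 km/h, pedestrian 20 m ahead, dry road
print(impact_speed(v, 20, 1.5, 0.7))  # human, ~1.5 s reaction: hits at full 50 km/h
print(impact_speed(v, 20, 0.1, 0.7))  # computer, ~0.1 s reaction: stops in time (0.0)
```

Same car, same pedestrian, same physics; the only difference is reaction time, and it turns a full-speed hit into no hit at all.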
So, my husband says there is still no moral dilemma, not really. (Also, he said, what if there is a whole family in the car? Or an oncoming car, or even a bus full of nuns? lol. This is why automatic cars will not be programmed to swerve.)
Current technology for the situation you are describing already exists, is coming into mass use, and is shown to lower the death toll for drivers and pedestrians. That's why so many new cars have automatic brakes. The technology you are using for your dilemma scenario poses no dilemma that he can find. It is widely agreed that computers react faster, and that automatic braking is much more likely to save the lives of the driver and the pedestrian than a human driver is. Deaths will still happen, though. That's a dilemma for driving itself, not for automatic cars.
Sorry, that was a bit rambly; my husband is talking to me while I type, and he's always hard for me to translate, lol.
Oh, he also just sent me this table of how engineers work through some ethical situations:
Workable Ethical Theories
So....that's my response as well. I have to listen to the man constantly on topics like this, and the truth is....I agree. Not just because he's my hubby (although he may be my hubby because we tend to agree on stuff like this, if you see what I mean).
Updated to correct some confusing wording.
“Eternity is a terrible thought. I mean, where's it going to end?”
― Tom Stoppard, Rosencrantz and Guildenstern Are Dead