Should driverless cars kill their own passengers to save a pedestrian?
#73
RE: Should driverless cars kill their own passengers to save a pedestrian?
(November 17, 2015 at 8:07 pm)IATIA Wrote:
(November 17, 2015 at 12:31 am)Aroura Wrote: ...  I agree that this is not some moral issue.  Like the train track dilemma.  But this one fails to give me much of a dilemma.  The car will do what it is programmed to do, which will be to try and avoid killing anyone.

As you say, "The car will do what it is programmed to do", but that is where the moral issue resides.  There is a programmer, or a team of programmers, who must consider the outcome of the decision-making process that will ultimately control the vehicle.  I do not know if there are any real programmers on board, but the programming must encompass worst-case scenarios.  In computer software and games, the general rule of thumb is that any input which fails to align with the intended programming is shoved into the bit bucket and the software resumes polling for input.  In the case of cars and planes, however, it would be disastrous to ignore unforeseen inputs, so worst-case scenarios must be considered.  They can usually be grouped into similar algorithms, but that is where the problem comes in.  Simple algorithm: something in the way, dodge it.  OH NO! Cliff, too late.

It does not matter how fast the computer is, because it must abide by physics and causality, i.e. the reason for stopping and the stopping distance.  Even if it can calculate, for the sake of argument, the exact stopping distance, it is still bound by the laws of physics, and that does not even account for tire wear, that patch of oil (hit one on a bike once, no fun), or any other unknowns.  Because the car cannot think and can only do what it is programmed to do, the programmer ultimately has to decide whether to program for the safety of the occupants, the safety of the greater number, or whatever.  This is where the morality lies, and ultimately the obligatory lawsuits.  It is obvious from some of the posts that some posters did not really read the article linked at the beginning.  So yes, from the programmer's standpoint, this is exactly the trolley problem.
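A rough back-of-the-envelope sketch of the stopping-distance point being made here (my own illustration, not from the article; constant deceleration and all numbers are assumed, illustrative values): total stopping distance is the distance covered during the reaction delay plus the braking distance, and no amount of processing speed shrinks the braking term.

[code]
# Rough sketch of the stopping-distance physics described above.
# Illustrative numbers only; constant deceleration on dry pavement is assumed.

def stopping_distance(speed_mps, reaction_time_s, decel_mps2):
    """Total distance: travel during the reaction delay plus braking distance."""
    reaction_distance = speed_mps * reaction_time_s          # car still at full speed
    braking_distance = speed_mps ** 2 / (2 * decel_mps2)     # v^2 / (2a), basic kinematics
    return reaction_distance + braking_distance

# 50 km/h is ~13.9 m/s; even with a near-instant 0.1 s "reaction" and hard
# braking at 7 m/s^2, the car still needs roughly 15 m to stop.
print(stopping_distance(13.9, 0.1, 7.0))   # ~15.2 m
[/code]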

Ok, my husband is a programmer who is working to become a computer engineer (well, to be completely honest, he is in his senior year of computer science, but he is currently working on his senior project, which will be used for this kind of decision-making system in robotics: ROS.org.  He's also been programming computers since the '80s), and he has some things he'd like me to relay in this thread.  First, he just finished taking an ethics in technology and engineering class, so this kind of stuff is all fresh in his mind.  For the following, please take into account that I'm trying to say what he says, but in words I understand better.  So his thoughts on the subject are that no engineer is going to program the car to swerve, because swerving opens up a whole host of unpredictable outcomes.  Swerving might send the car toward another car, or another pedestrian, etc.

It will have as simple a set of programs as possible.  Much like EXISTING technology that AUTOMATICALLY stops new cars if they sense something in front of them, so you as a human don't have to, a Google car will most likely just do its best to stop.  Even if it does not successfully stop before colliding with the pedestrian, it will do a better job than a human being of sensing the object or pedestrian and getting on the brakes in time.  The speed at impact will be far lower than it would be if a human driver were the one forced to react to the same situation.

In short, if it did not have enough distance between sensing the pedestrian and coming to a full stop, it would still hit the pedestrian, but it reacts so much more quickly than a person that the chance of injury or death would be greatly reduced compared with the same situation under a human driver, for both the pedestrian AND the driver.
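Roughly how much that reaction-time difference matters can be sketched with the same kinematics.  This is my own illustration, not his actual work; the speeds, distances, deceleration, and reaction times are all assumed, illustrative values.

[code]
import math

# Rough sketch, with assumed numbers, of how reaction time changes the speed
# at impact when the pedestrian appears too close to stop for.

def impact_speed(speed_mps, obstacle_m, reaction_s, decel_mps2):
    """Speed remaining when the car reaches the obstacle (0 means it stopped in time)."""
    braking_room = obstacle_m - speed_mps * reaction_s        # distance left once braking starts
    if braking_room <= 0:
        return speed_mps                                      # hits before the brakes even engage
    v_sq = speed_mps ** 2 - 2 * decel_mps2 * braking_room     # v^2 = u^2 - 2*a*d
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# 50 km/h (~13.9 m/s), pedestrian 16 m ahead, hard braking at 7 m/s^2 (all assumed):
print(impact_speed(13.9, 16.0, reaction_s=1.5, decel_mps2=7.0))  # human ~1.5 s: hits at full speed
print(impact_speed(13.9, 16.0, reaction_s=0.2, decel_mps2=7.0))  # automated ~0.2 s: hits at ~2.9 m/s
[/code]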

So, my husband says there is still no moral dilemma, not really. (Also, he said, what if there is a whole family in the car?  Or an oncoming car, or even a bus full of nuns?  lol.  This is why automatic cars will not be programmed to swerve.)
Current technology for the situation you are talking about already exists, is coming into mass use, and has been shown to lower the death toll of drivers and pedestrians.  That's why so many new cars have automatic brakes.  The technology you are using for your dilemma scenario poses no dilemma that he can find.  It is widely agreed that computers react faster, and that automatic braking is much more likely than a human driver to save the life of both the driver and the pedestrian.  Deaths will still happen, though.  That's a dilemma for driving itself, not for automatic cars.

Sorry, that was a bit rambly; my husband is talking to me while I type, and he's always hard for me to translate, lol.

Oh, he just sent me this table for how engineers determine some ethical situations as well.
Workable Ethical Theories

So....that's my response as well.  I have to listen to the man constantly on topics like this, and the truth is....I agree.  Not just because he's my hubby (although he may be my hubby because we tend to agree on stuff like this, if you see what I mean). Big Grin

Updated to correct some confusing wording.
“Eternity is a terrible thought. I mean, where's it going to end?” 
― Tom Stoppard, Rosencrantz and Guildenstern Are Dead