Judge, Jury and Executioner


I figured at least some of you are tired of hearing me talk about self-driving cars. I also assume that at least some of you, once you start reading, will read an article to the end regardless of whether you like it. I tried to get creative with the title of this post to appeal to a very specific market segment: the overlap between those two groups.


So again, self-driving cars are on the front line of debates in my area. And once again, I’ve heard the, honestly, completely ridiculous argument of morality in self-driving cars.

A self-driving car that had to choose between crashing into two pedestrians and killing them both, or crashing into a wall and killing its driver, would choose fewer deaths, and I would never want a car that consciously chooses to kill me!

Or sometimes, it’s a little more philosophical:

How should self-driving cars be programmed to act in such a situation, where it chooses between different deaths? Should it choose the fewest deaths, the lower risk of injury, or prioritize the driver at all costs? If it has to choose between a helmet-wearing cyclist and a non-helmet-wearing one, should it endanger the person who is actually respecting the law because they are more likely to survive? How should it choose between endangering an old person and a young one? As long as we cannot answer those questions as a society, self-driving cars should not be in control of who lives or dies!

Maybe this sounds logical to you. Maybe you never stopped to put it in perspective, or maybe we see things very differently. But I, for one, think this is extremely ridiculous.

Imagine the Black Death outbreak in the 1300s in Europe. People try and try, but they can’t figure out how to fight it. Then, imagine that a promising young scientist (this is hypothetical, so don’t focus on the historical inaccuracies or use of wrong-era words) thinks he found a way to completely stop the plague in an individual, through some method similar to modern vaccines, maybe with a component of “saving people who are already infected too”. He demonstrates that it works, but it’s pretty expensive. With some funding to start factories and hire people, he could both mass-produce it and use the data from recipients to improve the formula and reduce its price within a short time. But people oppose it.

If we help the production, since the cure is very expensive right now, it will only save rich people (at first)! We must delay any production of this cure until we can be sure everyone will be saved!

This is basically what people are saying about self-driving cars. We have a way to dramatically improve road safety, and people oppose it because, even though you would be a hundred times less likely to die in any car-related accident, this time the car would “choose” the victim.

I don’t know the specific stats on how many people die in car accidents while driving, as opposed to being hit by a car while not in one. But I know for a fact that even the unoptimized prototypes currently being tested by Google and other tech companies are incomparably safer than human-driven cars, so much so that if every single car on the road were replaced by Google cars tomorrow, there would be about as many total accidents as we see fatal accidents today in North America.

If you have to choose between a 1/100 chance of dying while driving, with pedestrians having a 1/100 chance of dying because you hit them, why would you ever not want to switch to a 5/10000 vs 1/10000 system? Yes, if you do get in an accident, the car would be more likely to kill you than the other person, but you’d still be a lot safer overall. And, of course, to actually improve the system, which is already better than what we have, you need people to use it in order to gather the necessary data. You will never fix this “problem” by arguing about it; by doing that, you are actively working against any fix.
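To make that trade-off concrete, here is a two-line sanity check of the rates above (the figures are the author’s own illustrative numbers, not real statistics):

```python
# Illustrative risk rates from the paragraph above (not real statistics).
driver_now, pedestrian_now = 1 / 100, 1 / 100        # human-driven world
driver_av, pedestrian_av = 5 / 10_000, 1 / 10_000    # hypothetical AV world

# The AV shifts *relative* risk toward the driver (5x the pedestrian's
# risk instead of equal), yet both *absolute* risks drop sharply:
driver_improvement = driver_now / driver_av              # 20x safer
pedestrian_improvement = pedestrian_now / pedestrian_av  # 100x safer
```

Even in the scenario where the car is "more willing" to sacrifice its driver, the driver still comes out twenty times safer than before.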

I am not arguing for releasing anything new, no matter how dangerous, so that data can be gathered to improve it. But when it’s already safer than what we currently have, there’s no reason not to kill two birds with one stone. I will admit that this was a bad choice of idiom.

Furthermore, this is not the only reason why this whole argument is irrelevant.

Do you know about the trolley problem?

It is a thought experiment in moral philosophy where people have to choose between non-involvement, which would result in the death of five innocent people, and active intervention, killing one innocent person in order to save the other five.

It is a very popular problem when discussing philosophy and the nature of morality, and it has hundreds of variants, which may or may not change your answer. Many people link the autonomous vehicle “dilemma” to the trolley problem. Basically, if the car has to choose between running over pedestrians and most likely killing several of them, or jumping off a cliff (or running into a wall, or any other “certain death of the driver” situation), should it actively kill its user to save more lives?

I can see how those are similar. But I can also see one more thing. As interesting as the Trolley Problem may be for discussion (hey, I liked Trolley Problem Memes on Facebook too!), there is one thing that everyone is aware of when discussing it.

It won’t happen. It is not an impossible situation, but it is a ridiculously unlikely one. Even tying people to train tracks, despite being a trope, is actually an extremely rare event; I doubt five people tied together has ever even happened. So except maybe the day a crazy person decides to purposely recreate the trolley problem in real life, in a Saw-like plan, the Trolley Problem is not something that will happen. Everyone knows that.

But for some reason, people think that this specific autonomous car situation will not only happen, but be frequent enough to warrant discussion. I’m not stupid; I know that with the sheer number of cars in this country, it will probably happen once every few years or decades. But that doesn’t make it relevant. Those “controversial” situations will happen nowhere near often enough to warrant special rules, let alone a complete ban on the entire technology before it even hits the market. That’s just a classic fear-mongering technique spread by human-driven car companies, luddite citizens, and politicians jumping on the bandwagon (pun totally intended) to please either group.

It’s textbook manipulation. Mention a scenario that is vague enough to sound realistic, yet specific enough to get people scared, or at least make them argue, to distract from something else.

It’s totally irrelevant.

Then lastly, there’s another important thing to keep in mind.

The car won’t even choose. The car will not choose to kill you or anyone else, or choose to put anyone or anything at risk. There is no programmer who gets to decide, with a carefully coded “if”, who lives and who dies.

Yes, the cars are being coded, and they have algorithms and rules. But they are built on artificial intelligence and machine learning principles. That means the car improves itself over time, just like a real driver, and it also means that the longer it is active, the less influence any programmer’s bias has on its behaviour. It will have metrics, rules used to judge the consequences of decisions, which will most likely look something like this:

Minimize human deaths

Minimize noticeable animal deaths (running over a dog is bad, but bumping into flies is irrelevant)

Minimize property damage (including the car itself)

Minimize infractions to driving laws

Minimize honks

Minimize time spent on the road

Minimize usage of fuel

Then, it will try to optimize its own behaviour according to those rules. This is just an example set of rules; the real thing will most likely be more complex and better thought out than “random blogger spent 3 minutes of his life thinking about it” level. I can see at least three major problems with my rules right off the bat, but that’s not the point.
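To illustrate the idea, here is a minimal sketch of how metrics like the ones listed above could be combined into a single cost the car tries to minimize. The weights, metric names, and candidate maneuvers are all invented for this example; a real planner would be vastly more sophisticated:

```python
# Hypothetical sketch: scoring candidate maneuvers with weighted
# penalty metrics like the ones listed above. All numbers are invented.
WEIGHTS = {
    "human_deaths": 1_000_000,
    "animal_deaths": 10_000,
    "property_damage": 100,   # per unit of estimated damage
    "law_infractions": 50,
    "honks": 5,
    "seconds_on_road": 1,
    "fuel_used": 2,
}

def cost(outcome: dict) -> float:
    """Total penalty for a predicted outcome; lower is better."""
    return sum(WEIGHTS[k] * outcome.get(k, 0) for k in WEIGHTS)

# Two candidate maneuvers with their predicted outcomes:
swerve = {"property_damage": 30, "law_infractions": 1, "seconds_on_road": 12}
brake_hard = {"honks": 2, "seconds_on_road": 20}

# The planner picks whichever predicted outcome costs less.
best = min([swerve, brake_hard], key=cost)
```

The point of the sketch is that no programmer writes an “if” deciding who dies; the behaviour emerges from how predicted outcomes score against the metrics, and learning adjusts the predictions over time.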

The point is that, by the time this trolley-like situation shows up, probably after years of usage, the algorithms will have unpredictably evolved, and will make a choice to the best of their ability. You might disagree with the choice they made, but there is no programmer to blame. Just like when a real driver encounters a totally new, unexpected situation, they react to the best of their ability, and we hope for the best. The car will do its best, and will be wrong sometimes. But for each wrong choice made in such a situation, it will have prevented thousands of accidents by not falling asleep, drunk-driving, being inattentive, or disrespecting the law the rest of the time.

I think we can cut them some slack. Or do some negative reinforcement, and burn the cars that make the wrong choices. That’s just like natural selection: only the cars that evolve into what we like will stick around. It would also help satisfy the gullible mob of angry people who don’t understand how things work and want justice for an accident. But if you can’t ever forgive a car for doing its best in a situation it didn’t cause, then I hope you never try to justify your own mistakes by blaming circumstances. That would be hypocrisy, or maybe a double standard against computers.

So this “dilemma” is completely made-up, illogical, irrelevant, and ignorant of how self-driving cars actually work.

Please stop spreading it. There are around 32 000 – 38 000 deaths from car accidents per year in the United States alone (the leading non-disease cause of death, closely followed by suicide; the data is not very complete, but I pieced together this approximation from here, here, here, and here). That is almost 4 deaths every hour. It means that, assuming the Google car accident rate can be maintained or improved as adoption spreads (the huge majority of their accidents are caused by the human drivers around them, so they would not happen in a society with only self-driving cars), every time you delay the adoption of self-driving cars by an hour, you kill a couple of people.
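The per-hour figure follows directly from the yearly range quoted above:

```python
# Sanity check of the "almost 4 deaths every hour" figure, using the
# 32,000-38,000 deaths-per-year range quoted above.
hours_per_year = 365 * 24             # 8760 hours
low_rate = 32_000 / hours_per_year    # roughly 3.7 deaths per hour
high_rate = 38_000 / hours_per_year   # roughly 4.3 deaths per hour
```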

Of course, if you are the nihilistic or anti-natalist type, then feel free to keep arguing. You are probably the only groups getting any moral victory here.

  • Kingfisher

    I agree that the “dilemma”, as usually presented, is complete nonsense. We have nightmares about being faced with impossible “choices”, but unless you are a surgeon, you’ve probably never faced one. The truth is, we have built things like policies to help us make choices, and we give advice like “when in doubt, read the instructions”. These exist not so much to help us make the right choice as to help us make a choice.

    There is one concern with self-driving cars, but it has more to do with machine logic than philosophical logic. It is that machine logic has to avoid contradictions at all costs, because an unchecked contradiction will cause the machine to seize as surely as two gears attempting to turn against each other. So when checking the algorithms that control a self-driving car, you have to make sure that there is no possibility the car will try to turn left and right at the same time.