Understanding our clients' businesses

The moral ethics of the self-driving car

Admit it: somewhere, we're all dreaming about self-driving cars, drone grocery drops and household robots that can turn us humans into lazy moguls on a fixed income. After the recent deadly crash involving a self-driving Uber, it seems we're not quite there yet… or was that incident just part of the learning curve? The difficulties of our AI-driven future lie in a number of elements currently underexposed in mainstream media. The Uber incident illustrated just that. "Who will police behavior in digital spaces, or do we relinquish decision-making to AI?" was one of the subjects on the agenda of the Future Laboratory event we recently attended in London.

Deciding who is at fault when a machine is at the steering wheel isn't all that easy. In a crisis, humans make quick decisions that are rarely very rational. But when a self-driving car is put in the same situation, which moral guidelines should it use? After all, in a self-driving car the decision is made by software.

Should we install a "moral status button" in the car that gives the rider three options? Push 1 for extreme altruism, 2 for utilitarian driving or 3 for extreme egoism? The extreme altruism car would limit the impact on other individuals, regardless of the consequences for its own passengers. The utilitarian car would promote the greatest happiness for the greatest number, and thus crash into an 80-year-old grandmother in order to save a young family of five in the car. The extreme egoism car would drive into a group of toddlers crossing the street if that saved the only passenger in the car.
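To make the dilemma a bit more tangible, here is a minimal, purely hypothetical sketch in Python of what such a "moral status button" could look like under the hood. The Outcome class, the choose_action function, the mode numbers and the harm scores are all our own illustrative assumptions, not anything that ships in a real vehicle.

```python
# Purely hypothetical sketch of a "moral status button".
# The rider picks a mode, and the car ranks its possible actions accordingly.
# All class names, modes and harm scores are illustrative assumptions,
# not a real vehicle API.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str               # e.g. "swerve", "brake and stay on course"
    harm_to_passengers: int  # people inside the car put at risk
    harm_to_others: int      # people outside the car put at risk

def choose_action(mode: int, outcomes: list) -> Outcome:
    """mode 1 = extreme altruism, 2 = utilitarian, 3 = extreme egoism."""
    if mode == 1:    # spare others first, whatever the cost to the passengers
        key = lambda o: (o.harm_to_others, o.harm_to_passengers)
    elif mode == 2:  # minimise total harm: the greatest good for the greatest number
        key = lambda o: o.harm_to_passengers + o.harm_to_others
    else:            # spare the passengers first, whatever the cost to others
        key = lambda o: (o.harm_to_passengers, o.harm_to_others)
    return min(outcomes, key=key)

options = [
    Outcome("swerve into the 80-year-old grandmother", harm_to_passengers=0, harm_to_others=1),
    Outcome("stay on course", harm_to_passengers=5, harm_to_others=0),
]
print(choose_action(2, options).label)  # the utilitarian mode sacrifices the grandmother
```

Even in this toy version, someone has to decide what counts as "harm" and how it is weighed. And that someone is a programmer, not the rider.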

Gender balance in household robots seems quite unbalanced. Siri and Alexa, the voice-activated assistants that set the tone, have women's names and willingly respond to all of our requests. Perhaps because the developers are mainly men?
Bernard Polet
Founder, Indiandribble

The fact is that most of the moral decisions behind self-driving cars are already being made for you.

GAFA (Google, Apple, Facebook and Amazon) are taking care of that. Every time you click "I agree" on the umpteenth change to the terms and conditions at one of our beloved GAFAs, you are agreeing to the new morals they are building into their programs.

Of course, the moral guidelines are written by the people working at GAFA and like-minded companies. And that's where it becomes interesting. A recent survey amongst 5,500 open-source users and developers found that 95% of respondents were male. Quite an interesting gender balance. But this is only one example. Gender balance in household robots also seems quite unbalanced. Siri and Alexa seem to have set the tone: these voice-activated assistants have women's names and willingly respond to all of our requests. Perhaps because the developers are mainly men? The same majority is deciding who your car will crash into when it has to make a choice…

OK, this may be starting to look like I'm selling you a piece on a GAFA male conspiracy…


But did you know that face recognition doesn't work as well on people of colour as on white people? Facial recognition software has problems recognizing dark-skinned faces, partly because its algorithms are usually written by white engineers, who dominate the technology sector. These engineers build on pre-existing code libraries, typically written by other white engineers. The software is geared to focus on white faces and is mostly tested on Caucasian people as well. The algorithms the coders use focus on facial features that may be more visible in one group of people than in another. What impact will this have on the decisions of our self-driving cars?
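If engineers wanted to at least surface this kind of bias, one common-sense check is to report accuracy per skin-tone group instead of a single overall number. The sketch below is a made-up illustration of such a disaggregated check: the accuracy_by_group function, the group labels and the sample results are placeholders we invented, not data from any real system.

```python
# Hypothetical sketch of a per-group accuracy check for a face recognition model.
# The data and group labels below are made-up placeholders; the point is simply
# that a single overall accuracy figure can hide large gaps between groups.

from collections import defaultdict

def accuracy_by_group(predictions):
    """predictions: list of (group, predicted_id, true_id) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted_id, true_id in predictions:
        total[group] += 1
        if predicted_id == true_id:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Illustrative, fabricated results of running some face matcher on a test set.
results = [
    ("lighter-skinned", "anna", "anna"), ("lighter-skinned", "ben", "ben"),
    ("lighter-skinned", "carl", "carl"), ("darker-skinned", "dina", "dina"),
    ("darker-skinned", "ebo", "amara"),  ("darker-skinned", "fatou", "ebo"),
]

for group, accuracy in accuracy_by_group(results).items():
    print(f"{group}: {accuracy:.0%} correct")
# A large gap between the groups is exactly the kind of signal that a single
# "overall accuracy" figure hides.
```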

So here we are. Not only will we soon be sitting in self-driving cars; the morals of these cars might also be those of white computer nerds who like submissive women and only recognize a limited number of black faces.

So next time you click “I agree”, think twice before you click us all into a new future.

Meanwhile, we are looking into the future of robots (both male- and female-inspired models ;-)) and how they could help us improve the service-oriented business we are in. What are the top 10 moral rules you can think of when it comes to a hospitality robot?

Would you rather be shuttled to an event in a self-driving car that knows everything about every subject and is super diplomatic when it comes to political matters? Or would you prefer to be entertained by a hip-hop-loving car that starts yo-ing at you as soon as you're in your seat :-)? The luxury of choice is also still a choice…

PS: If you want to explore more moral machine dilemmas, have a look at MIT's Moral Machine or call us for an extensive presentation on the subject of moral ethics.