
Can we teach robots ethics?

January 30, 2018

Towards the end of 2017, Google officially got the ball rolling with one of the most anticipated technological developments of the decade, testing driverless cars on city streets. As a student of philosophy, I am greatly intrigued that one of the future's greatest inventions has a lot in common with one of philosophy's oldest conundrums, known as 'The Trolley Problem'.

 

For those of you unfamiliar with the problem, here it is:

 

A runaway trolley is heading down the tracks toward five workers who will all be killed if the trolley proceeds on its present course. Sindhu is standing next to a large switch that can divert the trolley onto a different track. The only way to save the lives of the five workers is to divert the trolley onto another track that only has one worker on it. If Sindhu diverts the trolley onto the other track, this one worker will die, but the other five workers will be saved.

 

Most people would choose to pull the lever. The likelihood is that we value the five lives more than the single life. But the problem continues:

 

A runaway trolley is heading down the tracks toward five workers who will all be killed if the trolley proceeds on its present course. Sindhu is on a footbridge over the tracks, in between the approaching trolley and the five workers. Next to her on this footbridge is a stranger who happens to be very large. The only way to save the lives of the five workers is to push this stranger off the footbridge and onto the tracks below where his large body will stop the trolley. The stranger will die if Sindhu does this, but the five workers will be saved.

 

Some people see the act of physically pushing someone as different from pulling the lever, and cannot bring themselves to do it, despite the fact that the same number of lives is saved at the expense of the same number lost.

 

Now a utilitarian would most likely argue for pushing the man, assuming that this person isn't of some incredible importance to the world. This, for them, would bring the greatest good for the greatest number.

 

Not all ethicists would reach the same conclusion. Followers of Kant or Aristotle may argue that some principles, such as the duty not to kill, are more important than generating the greatest good for the greatest number.

 

Google is starting to face these issues. Driverless cars will be forced to make ethical decisions. If a car is driving down the road and is about to crash into five people, but could swerve and kill one person instead, what should it do?

 

This means that Google is going to have to programme ethics into these driverless cars, which throws up a whole host of fascinating questions. What sort of ethics do we apply? Whose values do we prioritise: the driver's, or those of society in general? Most of us would probably subscribe to a set of utilitarian values; after all, we tend to maximise our self-interest. The issue with this line of thought is deciding what we actually value. Whether to value old people over young people may well be a question grilling the brightest minds in Silicon Valley right now.
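To make this concrete, here is a minimal sketch in Python of how two rival ethical codes could be written as interchangeable decision rules. Everything in it, from the toy Outcome record to the two choice functions, is a hypothetical illustration, not anything Google has published:

```python
from dataclasses import dataclass

# A hypothetical description of one possible action and its consequences.
@dataclass
class Outcome:
    action: str           # e.g. "stay_course" or "swerve"
    deaths: int           # expected fatalities if this action is taken
    redirects_harm: bool  # does the car actively redirect harm at someone?

def utilitarian_choice(outcomes):
    # Minimise expected deaths, full stop: the greatest good for the
    # greatest number.
    return min(outcomes, key=lambda o: o.deaths)

def deontological_choice(outcomes):
    # Treat "do not actively kill" as a constraint first; only then
    # minimise deaths among the actions that remain.
    permitted = [o for o in outcomes if not o.redirects_harm]
    return min(permitted or outcomes, key=lambda o: o.deaths)

# The swerve dilemma from the paragraph above, encoded as two outcomes.
dilemma = [
    Outcome("stay_course", deaths=5, redirects_harm=False),
    Outcome("swerve", deaths=1, redirects_harm=True),
]

print(utilitarian_choice(dilemma).action)    # "swerve": one death beats five
print(deontological_choice(dilemma).action)  # "stay_course": won't redirect harm
```

Swapping one function for the other is the code-level version of choosing between ethical codes: the disagreement between the utilitarian and the Kantian becomes, quite literally, a configuration choice.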

 

Another major question to ask is: who is going to make these decisions? If the government programmes ethics into AI, this could set a precedent regarding hierarchy in society that leads to a great deal of social tension. For example, if it appeared that people of a particular race were valued less than another, this would undoubtedly deepen racial divides. The same applies if the car chose to swerve into the elderly instead of the young. We could allow the company to decide, but this would be disastrous, resulting in a set of ethical codes that protected large and effective interest groups. I also do not think a few large corporations should be dictating morality to us, particularly when they are so often found to be playing by different rules anyway. You only have to look at their attitude towards paying tax to see what I mean.

 

So maybe it is up to us, as individuals, to decide what kind of code our car follows. Could we walk into a showroom and ask for a Socratic Sedan? Or a utilitarian Convertible? Perhaps it is about time that humans tried to agree on some form of global moral code, which could lead to greater cohesion in society. This has got me thinking about whether AI could actually improve our ethics.

Ron Arkin, a roboethicist at the Georgia Institute of Technology, believes that robots could be more ethical than humans, since they do not suffer from the feelings that lead to human irrationality. Acts of hatred and fear result in us doing things that are far from beneficial, and removing human involvement from moral decisions could eliminate this form of human error. At the same time, what concerns me is that AI may not be able to feel the whole host of traits that underpin us as moral beings: compassion and guilt, for example, shape many of our moral decisions through previous experience.

 

But I can't help thinking that there is a great deal to gain from intertwining moral questions with the rise of AI and machine learning. After all, our ethics evolved over time: even most of the liberal thinkers of the 18th century did not oppose slavery, nor did they see homosexuality as anything more than a sin. Machine learning, an area of computer science in which machines independently learn from previous interactions, could allow a machine to develop its ethical code far faster than most parts of western culture do today. If we could embed the principles of international human rights law into robots, enabling them to see the importance of the likes of property rights and the rule of law, we could learn from AI, which might in time develop a universal moral code.
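As a toy illustration of that idea, and only that, here is a perceptron-style learner in Python that adjusts a crude 'moral weighting' from human verdicts on the two trolley cases described earlier. The features, verdicts, and numbers are all invented; real machine-ethics research is far more involved:

```python
def predict(weights, features):
    # A positive score means "permissible" under the current weighting.
    return sum(w * f for w, f in zip(weights, features))

def train(cases, epochs=20, lr=0.1):
    # Each case: ([net lives saved, uses personal force?], human verdict).
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for features, verdict in cases:  # verdict: +1 permissible, -1 not
            if verdict * predict(weights, features) <= 0:  # model got it wrong
                weights = [w + lr * verdict * f
                           for w, f in zip(weights, features)]
    return weights

# Invented "human judgments" echoing the two dilemmas above.
cases = [
    ([4, 0], +1),  # lever case: four net lives saved, no personal force -> OK
    ([4, 1], -1),  # footbridge: same arithmetic, personal force -> not OK
]

weights = train(cases)
print(weights)                       # personal force ends up heavily penalised
print(predict(weights, [4, 0]) > 0)  # True: diverting is judged permissible
print(predict(weights, [4, 1]) > 0)  # False: pushing is judged impermissible
```

Notably, to fit both verdicts the learner is forced to attach a heavy penalty to the use of personal force, in effect rediscovering the intuition that separates the lever case from the footbridge case.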

 

But one thing is for sure: when it comes to driverless cars, we are definitely at a crossroads.
