Autonomous vehicles will make practical not moral decisions, finds new research
Autonomous vehicles will never be programmed to solve moral dilemmas but will require a clear and agreed set of rules for decision-making.
That’s the finding of a new report on ‘The Moral Algorithm: How to Set the Moral Compass for Autonomous Vehicles’.
Published today (6 December) by international law firm Gowling WLG on behalf of the UK Autodrive consortium, the research responds to current debates on the so-called ‘Trolley Problem’, which asks whether an autonomous vehicle (AV) faced with an unavoidable crash should choose to hit an elderly person or a child.
In the report, Gowling WLG finds that concerns over the dilemma may have been exaggerated, with most of the experts interviewed agreeing that autonomous vehicles will never be programmed to make such distinctions.
Dr Heiko Schilling, head of navigation at TomTom, said: “The moral algorithm is a term that is good for tabloid newspapers. There is no such thing in the software that will tell the car to hit the 80-year-old in order not to hit the group of kids.”
The sentiment is echoed by other members of the UK Autodrive consortium, who stress that not only is it technologically impossible to program an algorithm to weigh an infinite number of moral values, but that it is also not something any government would ever sign up to.
However, the report says that driverless vehicles will need to be programmed with a clear and agreed set of rules for decision-making, including rules on when to break traffic regulations.
For example, AVs will need to decide whether to enter an area marked with chevrons and bordered by a solid white line in order to pass a parked car, or exceed the speed limit to make way for an ambulance.
Kieran Laird, principal associate in the Public Law & Regulation team at Gowling WLG, said: “We can conceptualise a pyramid of moral issues that AVs will need to deal with. Major decisions in relation to human life – such as those outlined in the trolley problem – are at the top, but are relatively rare. Below these are decisions regarding compliance with road traffic rules and, at the bottom of the pyramid but still hugely important, matters of driver behaviour in relation to other road users which will make up the majority of moral decisions that an AV needs to make as part of a journey.”
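To make the pyramid idea concrete, the hierarchy described above can be pictured as a priority-ordered rule check, evaluated from the top tier down. The sketch below is purely illustrative: all class names, rule conditions and outcomes are hypothetical assumptions for this article, not anything taken from the report or from any real AV software.

```python
# Hypothetical sketch of the "pyramid" of AV decisions as a
# priority-ordered rule check. All names and rules are illustrative.

from dataclasses import dataclass


@dataclass
class Situation:
    """A toy snapshot of what the vehicle can observe (assumed fields)."""
    emergency_vehicle_behind: bool = False
    parked_car_blocking_lane: bool = False
    chevron_area_clear: bool = False


def decide(situation: Situation) -> str:
    """Evaluate rules from the top of the pyramid downwards."""
    # Tier 1 (top, rare): major safety decisions would be checked first
    # against whatever agreed policy a regulator sets - not modelled here.

    # Tier 2: compliance with road traffic rules, including agreed
    # exceptions such as making way for an ambulance or crossing
    # chevron markings to pass an obstruction.
    if situation.emergency_vehicle_behind:
        return "pull aside, exceeding limits only as the rules permit"
    if situation.parked_car_blocking_lane and situation.chevron_area_clear:
        return "cross the chevron markings to pass the obstruction"

    # Tier 3 (bottom, most common): everyday behaviour toward
    # other road users.
    return "proceed normally, yielding courteously to other road users"


print(decide(Situation(emergency_vehicle_behind=True)))
```

The point of the ordering is the one Laird makes: the rare, headline-grabbing cases sit at the top, but the bulk of a journey's "moral" decisions fall through to the lower, more mundane tiers.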
The report concludes with a series of eight recommendations, including the creation of an independent regulator to balance the legal, safety and commercial issues surrounding autonomous vehicles; the development of a policy on how the moral algorithm should operate in major safety situations; and a programme of public education and consultation.
Speaking about the dilemmas that could be posed once cars are required to make complex decisions, Tim Armitage, Arup’s UK Autodrive project director, said: “As with any complex new technology, AVs cannot be specifically programmed to respond to every possible scenario. This simply isn’t practical when a machine is expected to interact with humans, in a complex environment, on a day-to-day basis. AVs will drive to the speed limits and will not be distracted from the task of safe driving; they will make practical decisions based on their programming, but they cannot be expected to make moral decisions around which society provides no agreed guidance.
“To allow AVs to demonstrate their capacity for practical decision-making in complex environments, and to begin to establish public trust through contact, the first step is allowing testing in relatively simple and well-defined environments. Of course, regulation will need to keep up, so it is vital the legal industry act now in order to help create a realistic and viable route to market for AVs.”