Moral machines raise killer questions

Northrop Grumman X-47B (Copyright: Northrop Grumman)

Who should take responsibility for decisions made by “intelligent” machines like killer drones or autonomous cars?

If you’re looking for a headline to capture our technological times, this recent one might fit the bill: “The DIY kid-tracking drone.”

To help keep a close eye on his child, American parent Paul Wallich adapted a quadcopter to follow a GPS chip in his son’s backpack as the boy walks to the school bus stop each morning: an ingenious outsourcing of parental responsibility, and a formidable piece of hacking to boot. What, though, does this say about the increasing delegation to machines not only of daily tasks, but of potentially life-changing decisions themselves?

Take a life-and-death dilemma that could soon be science fact rather than fiction. As psychology professor Gary Marcus pointed out in a recent piece for The New Yorker, driverless cars are now street-legal in three American states and could soon be cruising into a garage near you. But if your driverless car were, hypothetically, to encounter a bus full of schoolchildren on the wrong side of the road, how should it react: should it swerve, avoiding a collision but putting your life at risk, or should it hit the bus if that gives you a better chance of surviving? “If the decision must be made in milliseconds,” Marcus reasons, “the computer will have to make the call.”
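To make that point concrete, here is a minimal, purely hypothetical sketch in Python of what “making the call” might look like once it has been reduced to code. The function name, the survival probabilities and the minimise-expected-deaths rule are all inventions for illustration, not a description of any real driverless-car system.

```python
# Hypothetical sketch only: the names, numbers and rule below are invented
# for illustration, not taken from any real driverless-car system.

def choose_manoeuvre(p_survive_swerve: float,
                     p_survive_collision: float,
                     expected_bus_casualties: float) -> str:
    """Return 'swerve' or 'hold course' under one possible encoded rule."""
    # This particular rule minimises total expected deaths; another designer
    # might instead protect the occupant at all costs, or refuse to weigh
    # lives against one another at all.
    expected_deaths_if_swerve = 1 - p_survive_swerve
    expected_deaths_if_collide = (1 - p_survive_collision) + expected_bus_casualties
    return "swerve" if expected_deaths_if_swerve < expected_deaths_if_collide else "hold course"


# Example: swerving is risky for you but far safer for the bus.
print(choose_manoeuvre(p_survive_swerve=0.7,
                       p_survive_collision=0.95,
                       expected_bus_casualties=10))   # -> swerve
```

The unsettling part is not the arithmetic but the fact that somebody had to choose the rule long before the bus ever appeared.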

Cars are an interesting case study because they already represent one of the most hazardous technological environments most of us enter on a regular basis – and because the belief that technology should make this environment as safe as possible has been accepted, and been saving lives, for decades. As a famous adage of product design puts it, once you’ve built the car, your task is not simply to hope that you can prevent all accidents, but to design a better car crash – that is, to make those inevitable occasions on which things go wrong as unlikely as possible to cause fatal harm.

Minimising harm is a clear enough good. What happens, though, to ethical issues in design when the product you’re creating is itself going to be making decisions?

Killing machine

Marcus’ driverless car scenario riffs off the famous “trolley problem”: a thought experiment that asks subjects to decide between pushing one man off a bridge in order to save the lives of several others trapped in the path of a runaway railway trolley, or allowing that one man to survive while the others meet their doom. Most respondents are unwilling physically to push another human being to their death even if it would save multiple lives – one of the factors the test is designed to measure. A machine’s response, though, would depend entirely upon the decision-making model encoded by its creators. One robot might refuse to act; one might save the many; one might not even recognize a choice exists. It’s up to us to decide which program to write. But once we’ve done it and set things in motion, it’s up to them.
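By way of illustration, here is a minimal, entirely hypothetical Python sketch of three such encoded models for the trolley scenario; the names and logic are invented for this article, not drawn from any real robot.

```python
# Purely illustrative: three invented decision models for the trolley scenario.

def refuse_to_act(people_on_bridge: int, people_on_track: int) -> str:
    # Never initiates harm, whatever the consequences.
    return "do nothing"

def minimise_deaths(people_on_bridge: int, people_on_track: int) -> str:
    # Sacrifices the few to save the many.
    return "push" if people_on_bridge < people_on_track else "do nothing"

def no_dilemma_recognised(people_on_bridge: int, people_on_track: int) -> str:
    # Has no concept that a choice exists at all.
    return "carry on as normal"

for model in (refuse_to_act, minimise_deaths, no_dilemma_recognised):
    print(f"{model.__name__}: {model(1, 5)}")
```

Identical inputs, three different outcomes: which model ships is a choice made by programmers long before the dilemma ever arrives.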

Programming this into cars is one thing. Weapons are quite another – and represent perhaps the most urgent testing-ground for machine ethics today. Early this December, Iran boasted that it had captured a US ScanEagle drone in its airspace: one of the most basic models among the thousands of unmanned aerial vehicles that have become the standard stuff of warfare over the last decade.

Like Paul Wallich’s child-minding quadcopter, most drones are more like sophisticated remote-controlled aircraft than autonomous decision-makers. Late November, though, saw some of the first tests of what some news outlets colourfully labelled a “killer robot”: a US Navy stealth drone piloted by artificial intelligence. Snappily named the X-47B Unmanned Combat Air System, the 19m-wide (62ft) aircraft is designed to take split-second decisions on its own initiative – and even land itself on an aircraft carrier – while remaining under the overall control of human operators.

As much as anything, it’s the relationship between these human operators and their subject that’s most disturbing. Thought experiments like the trolley problem demonstrate something self-evident but extremely significant in moral thinking: how our sense of obligation is modified by distance and immediacy. We feel differently about pushing someone off a bridge with our own hands than we do about pushing a button to achieve the same result. Weapons are real-world examples of this: the arrow differs from the fist, the rifle from the arrow, the bomb from the gun. And by the time you reach autonomous systems, many of the “actions” setting them in motion will have taken place months or even years ago: as part of a process of programming and design whose eventual consequences may be entirely unknowable.

If all this sounds rather removed from real life, it’s worth remembering that the cutting edge of today’s battlefield will represent little more than the state-of-the-art in domestic appliances a decade from now. Such is the speed of technological change, and the steady migration of its moral conundrums towards the mass market.

It’s a field in which the ghostly outlines of future controversy are already clearly visible. If a drone can oversee your child’s journey to school today, will it be driving you both to work tomorrow – and should you insist on this, if the machine is demonstrably a safer driver than you? If robot soldiers can fight for your country without risking your sons’ and daughters’ lives, should you gratefully embrace automated warfare?

These are new questions, but answering them means grappling with ancient ethical issues about responsibility. What does it mean to hold someone responsible not only for their actions, but for the chains of consequence they set snaking through the world? And what does it mean to exercise human ingenuity responsibly, given the ever-expanding scope of our powers?

Ironically, the best contemporary answer may mean giving up some of that power – and asking what it means to create machines that can take responsibility for themselves.
