The first auto safety device probably was the padded dashboard, unless you count such basics as roofs and windshields. Whatever the case, such features have proliferated to seat belts, air bags, rear cameras and the like.
Now researchers at the University of Alabama at Birmingham (UAB) are studying what may be the ultimate safety feature, and a counterintuitive one: a self-driving car that would allow its own occupants to die if its computer determines that they are outnumbered by the people whose lives are threatened in a looming accident.
“Ultimately this problem devolves into a choice between utilitarianism and deontology,” UAB alumnus Ameen Barghi, a bioethicist, tells the school's news department. Deontology, he explains, is the ethical principle that “some values are simply categorically always true.”
Let's step back for a moment and look at a dilemma that highlights this ethical problem. Classically it's known as the Trolley Problem: An employee in charge of a switch on a trolley track knows a trolley is due to pass by soon, but suddenly notices that a school bus full of children is stalled on that track. A look at the alternate route shows that the employee's own young child has somehow crawled onto that track.
The choice is either to save his child or to save the many children on the bus. Which is right?
Now shift this dilemma to a highway of the not-too-distant future. It is crowded with cars, many of them self-driving vehicles. Google, which already has been experimenting with such autos, says its cars can ably handle the risks of the road, and boasts that any accidents involving its cars have been caused by human error, not programming glitches.
So here's another example of the dilemma involving not trolleys but cars: A tire suddenly blows out on a self-driving vehicle, and the auto's computer must now decide whether to allow the car to careen into oncoming traffic or deliberately steer the car into a retaining wall. Does it base its choice on the benefit of its occupants, or the benefit of others who may outnumber them?
Here's how Barghi breaks it down: “Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he told the UAB news department. In this scenario, then, the car should be programmed to ram into the retaining wall, endangering its occupants but sparing others on the highway.
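The utilitarian calculus Barghi describes can be reduced to a toy decision function: count the lives each maneuver endangers and pick the maneuver with the smaller count. The sketch below is purely illustrative; the function names and casualty figures are hypothetical and have nothing to do with Google's or any manufacturer's actual software.

```python
# Hypothetical sketch of the two ethical rules discussed in the article.
# Nothing here reflects any real vehicle's control software.

def utilitarian_choice(maneuvers):
    """maneuvers: dict mapping a maneuver name to the number of lives
    it endangers. Return the maneuver that endangers the fewest."""
    return min(maneuvers, key=maneuvers.get)

def deontological_choice(maneuvers, current_course):
    """A deontological rule as the article frames it: never actively
    redirect harm, so stay on the current course whatever the counts."""
    return current_course

# The blown-tire scenario: ram the wall (2 occupants at risk) or
# careen into oncoming traffic (5 other people at risk).
scenario = {
    "hit_retaining_wall": 2,    # the car's own occupants
    "swerve_into_traffic": 5,   # people in oncoming cars
}

print(utilitarian_choice(scenario))       # hit_retaining_wall
print(deontological_choice(scenario, "swerve_into_traffic"))
```

The utilitarian rule sacrifices the occupants because two is fewer than five; the deontological rule refuses to steer into anyone deliberately, which is why, as Barghi notes below, a deontologist would not program the car to sacrifice its passengers.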
But then there's deontology, which we might call ethical absolutism. “For example, [deontology dictates that] murder is always wrong, and we should never do it,” Barghi says. In the Trolley Problem, deontology says that “even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one.”
As a result, he said, a company that follows deontology shouldn't program self-driving cars to save others at the cost of the lives of their occupants.
There's no word on where Barghi stands on the dilemma of the self-driving car or the Trolley Problem. The UAB graduate, who will enroll at Britain's Oxford University in the autumn as a Rhodes Scholar, seems more interested in studying and debating such predicaments than in solving them. He served as a senior leader on UAB's team at the Bioethics Bowl in April at Florida State University, where the team won this year's national championship.
But here's a hint: In last year's Bioethics Bowl, Barghi's team argued a related case: whether governments would be justified in banning human driving altogether if self-driving cars proved significantly safer than cars with human drivers. His team argued in favor of the self-driving cars.
By Andy Tully of Oilprice.com