Andy Tully

Andy Tully is a veteran news reporter who is now the news editor for Oilprice.com


New Safety Feature: A Smart Car Programmed To Let You Die?

The first auto safety device probably was the padded dashboard, unless you count such basics as roofs and windshields. Whatever the case, safety features have since proliferated to include seat belts, air bags, rear-view cameras and the like.

Now researchers at the University of Alabama at Birmingham (UAB) are studying what may be the ultimate safety feature, and a counter-intuitive one: a self-driving car that would allow its own occupants to die if its computer determines they are outnumbered by the people whose lives are threatened in a looming accident.

“Ultimately this problem devolves into a choice between utilitarianism and deontology,” UAB alumnus Ameen Barghi, a bioethicist, tells the school's news department. Deontology is the ethical principle that “some values are simply categorically always true.”

Let's step back for a moment and look at a dilemma that highlights this ethical problem. Classically it's known as the Trolley Problem: An employee in charge of a switch on a trolley track knows a trolley is due to pass by soon, but suddenly notices that a school bus full of children is stalled on that track. A look at the alternate route shows that the employee's own young child has somehow crawled onto that track.
His choice is either to save his child or to save the many children on the bus. Which is right?

Now shift this dilemma to a highway of the not-too-distant future, crowded with cars, many of them self-driving. Google, which has already been experimenting with such autos, says its cars can ably handle the risks of the road, and boasts that any accidents involving its cars have been caused by human error, not programming glitches.

So here's another example of the dilemma involving not trolleys but cars: A tire suddenly blows out on a self-driving vehicle, and the auto's computer must now decide whether to allow the car to careen into oncoming traffic or deliberately steer the car into a retaining wall. Does it base its choice on the benefit of its occupants, or the benefit of others who may outnumber them?

Here's how Barghi breaks it down: “Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he told the UAB news department. In this scenario, then, the car should be programmed to ram into the retaining wall, endangering its occupants but sparing others on the highway.
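To make the utilitarian rule concrete, here is a minimal sketch in Python of how such a decision might be coded. The function name, maneuver labels and head counts are hypothetical illustrations, not anything Google, UAB or any automaker has published.

```python
# A hypothetical utilitarian policy: minimize total expected casualties,
# counting the car's occupants the same as everyone else on the road.

def utilitarian_maneuver(occupants_at_risk: int, others_at_risk: int) -> str:
    """Pick the maneuver that endangers the fewest people overall."""
    if others_at_risk > occupants_at_risk:
        # Sacrificing the occupants spares the larger group.
        return "steer into retaining wall"
    return "stay on course"

# Two occupants versus five people in oncoming traffic:
print(utilitarian_maneuver(occupants_at_risk=2, others_at_risk=5))
# Prints: steer into retaining wall
```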

But then there's deontology, which we might call ethical absolutism. “For example, [deontology dictates that] murder is always wrong, and we should never do it,” Barghi says. In the Trolley Problem, deontology says that “even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one.”

As a result, he said, a company that follows deontology shouldn't program self-driving cars to save others at the cost of sacrificing the lives of their occupants.
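By contrast, a deontological policy would ignore the arithmetic entirely. Here is a companion sketch, again purely illustrative, using the same hypothetical names as above:

```python
# A hypothetical deontological policy: the car may never actively choose
# a maneuver that sacrifices anyone, so head counts play no role.

def deontological_maneuver(occupants_at_risk: int, others_at_risk: int) -> str:
    """Refuse to trade lives; never deliberately steer into anyone."""
    # Deliberately hitting the wall would mean actively killing the
    # occupants, which this rule forbids regardless of the numbers.
    return "stay on course"

print(deontological_maneuver(occupants_at_risk=2, others_at_risk=5))
# Prints: stay on course
```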

There's no word on where Barghi stands on the dilemma of the self-driving car or the Trolley Problem. The UAB graduate, who will enroll at Britain's Oxford University in the autumn as a Rhodes Scholar, seems more interested in studying and debating such predicaments than in settling them. He served as a senior leader of UAB's team at the Bioethics Bowl in April at Florida State University, where the team won this year's national championship.

But here's a hint: In last year's Bioethics Bowl, Barghi's team argued a related case: whether governments would be justified in banning human driving altogether if self-driving cars proved to be significantly safer than cars with human drivers. The team argued in favor of the self-driving cars.


By Andy Tully Of Oilprice.com






Leave a comment
  • aed939 on June 28 2015 said:
    In order to gain consumers' confidence in robots, self-driving cars must be completely loyal to their owners. They must make decisions that are consistent with the owners' own preferences, and that usually means owners value the lives of their own children over strangers' children.
