Tech leader Elon Musk is expanding his influence into just-war policy with a new letter to the United Nations calling for laws to ban the creation and use of killer robots in conflict.
Musk—one of the major forces behind the artificial intelligence push for self-driving cars—says AI poses a bigger threat than a nuclear North Korea, and that the development of autonomous weapons “threatens to become the third revolution in warfare”.
Musk isn’t the only tech leader sounding the alarm. The letter was co-signed by 115 other experts who believe deadly autonomous weapons could unleash violence on a scale seen only with chemical or biological weapons.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter, released to the public on Monday, said. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The experts hail from leading technology companies and the world’s robotics and artificial intelligence (AI) communities. Scientists from countries including China, Israel, Russia, and Britain addressed the letter to the United Nations Convention on Certain Conventional Weapons, which specializes in containing the spread of devices “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.”
Government entities are notorious for falling behind the tech world in creating protective policies.
Izumi Nakamitsu, the U.N. head for Disarmament Affairs, said the lag is particularly dire in the killer robot field.
“There are currently no multilateral standards or regulations covering military AI applications,” Nakamitsu wrote. “Without wanting to sound alarmist, there is a very real danger that without prompt action, technological innovation will outpace civilian oversight in this space.”
So far, laws to govern robot behavior have most famously been devised in sci-fi writer Isaac Asimov’s short story “Runaround.” The three laws, ingrained in the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.,” read as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In the decades since, countless sci-fi writers have used the three laws in their own works to elegantly explain their robot characters’ code of ethics. By the time Asimov’s canon had robots in control of entire planets, a zeroth law had come into play as well: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
Although popular in the world of fiction, it would be foolish for the U.N. to adopt these rules to control AI. As Asimov himself showed in The Naked Sun, a novel featuring detective Elijah Baley, loopholes abound within this system, and criminal masterminds or warlords could use a robot’s obedience as a handy tool to commit violence.
What happens when a robot is asked to add a substance to a drink and serve it to someone, when, unbeknownst to the robot, the special ingredient is poison? The device will carry out the command because it is ignorant of the commanding human’s intent. On a larger scale, a network of thousands of robots, each completing a separate task that contributes only partially and indirectly to a devastating attack on a civilization, would not be thwarted by AI built on these laws. The robots simply do not have enough information to stop themselves.
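The flaw described above is structural: a rule like the First Law can only be checked against what the robot knows, not against the true consequences of its actions. A minimal sketch (not from the article; all names are illustrative) shows how a harmful plan passes every local rule check when the agent's knowledge is incomplete:

```python
# Sketch of why Asimov-style rules fail under incomplete information:
# the robot checks the First Law against only what it knows, so a plan
# whose harm is hidden from it passes every check.

def first_law_allows(action, known_effects):
    """Return True if, given this knowledge base, the action harms no human."""
    return "harms_human" not in known_effects.get(action, [])

# What the robot believes: the ingredient merely changes the drink.
robot_knowledge = {
    "add_ingredient": ["alters_flavor"],
    "serve_drink": ["quenches_thirst"],
}

# Ground truth, known only to the malicious commander.
actual_effects = {
    "add_ingredient": ["harms_human"],  # the ingredient is poison
    "serve_drink": ["harms_human"],
}

plan = ["add_ingredient", "serve_drink"]

# Every step passes the robot's local First Law check...
assert all(first_law_allows(step, robot_knowledge) for step in plan)
# ...even though, against the real effects, every step is lethal.
assert not any(first_law_allows(step, actual_effects) for step in plan)
```

The same gap scales to the distributed case: split `actual_effects` across thousands of robots and no single knowledge base ever contains enough of the plan to trigger a refusal.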
In any case, the real world is almost always more complicated than can be depicted in a sci-fi novel. New laws regarding autonomous cars and other smart technology pass regularly, but the policy legwork to prevent the weaponization of the cutting edge is still stuck in the Middle Ages.
It’s time to follow these thought leaders and bring our AI laws into the future.
By Zainab Calcuttawala for Oilprice.com