Elon Musk, Google’s DeepMind co-founders promise never to make killer robots
Tesla and SpaceX billionaire Elon Musk and all three of the co-founders of Google’s DeepMind are among the thousands of individuals and almost 200 organizations who have publicly committed not to develop, manufacture or use killer robots.
“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” reads the pledge published Wednesday and organized by the Future of Life Institute, a Boston-based nonprofit that researches the benefits and risks of artificial intelligence and other existential issues related to advancing technology.
“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others — or nobody — will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual,” the pledge says.
So far, 195 organizations and 2,633 scientists, engineers, researchers and entrepreneurs have signed the letter, a commitment not to develop, manufacture, trade or use lethal autonomous weapons, also known as killer robots. The letter was published Wednesday and announced at the annual International Joint Conference on Artificial Intelligence in Stockholm, Sweden.
“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” the pledge reads.
Musk has been particularly vocal about the potential dangers of artificial intelligence. For example: “Mark my words, AI is far more dangerous than nukes,” Musk said at the South by Southwest tech conference in Austin, Texas, in March.
DeepMind is an artificial intelligence company which was founded in London in 2010 and acquired by Google in 2014. All three of the company’s co-founders — Demis Hassabis, Shane Legg, and Mustafa Suleyman — are signatories on the pledge.
Google employees have recently petitioned the company’s management to extricate itself from a contract with the United States Department of Defense called Project Maven. The partnership involved Google developing artificial intelligence surveillance to help the military analyze video footage captured by U.S. government drones. “We believe that Google should not be in the business of war,” Google employees wrote in a letter to their boss, CEO Sundar Pichai. In June, Google Cloud chief Diane Greene told employees the company would not renew its contract with the Department of Defense after it expires in March 2019.
For Wednesday’s pledge published by the Future of Life organization, “lethal autonomous weapons” are defined as those that can “identify, target, and kill a person, without a human ‘in-the-loop,’” according to the nonprofit’s written statement. “That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system,” the statement says.
Meanwhile, “today’s drones” are not included in the Future of Life’s definition of lethal autonomous weapons because drones “are under human control,” the statement says. Further, autonomous machines that defend against other weapons are not included either, the statement says.
“AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way,” Max Tegmark, president of the Future of Life Institute and MIT physics professor, said in a statement announcing the pledge.
Wednesday’s letter is not the first public declaration against killer robots coordinated by the Boston-based nonprofit. In August 2017, the Future of Life Institute organized an open letter to the United Nations. Musk, Hassabis and Suleyman signed that letter as well.
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the 2017 letter says. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Although he has not publicly signed either pledge, Amazon billionaire boss Jeff Bezos has also recently voiced his fear of autonomous weapons.
“I think autonomous weapons are extremely scary,” Bezos said, speaking at the George W. Bush Presidential Center’s Forum on Leadership in April.
The artificial intelligence techniques that “we already know and understand are perfectly adequate” to create these kinds of weapons, Bezos said, adding that “these weapons, some of the ideas that people have for these weapons, are in fact very scary.”