More reasons to worry about AI in war and peace

UNI joins academics and industry leaders’ appeal for a responsible approach to AI.

More than 100 academics and industry leaders are urging the United Nations to ban lethal autonomous weapons.

Tesla and SpaceX CEO Elon Musk, Google DeepMind co-founder Mustafa Suleyman, and 114 others signed the open letter to the UN, published earlier this week, calling for a ban on killer artificial intelligence.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter warns, concluding with an urgent plea for the UN “to find a way to protect us all from these dangers.”

The capability to build autonomous weapons has moved from science fiction to reality over the past 15 years, and UNI is joining this call for a responsible approach to AI.
Philip Jennings, General Secretary of UNI Global Union, said:

“We must fully consider the real-world consequences of artificial intelligence. Going forward, our deployment of this technology should not be solely determined by what is possible, but what is ethical and what makes our world more just. The use of autonomous killing machines does not meet either of those criteria and is better left on the silver screen than put on our streets.”

UNI has repeatedly called for the ethical use and development of AI, and the operational requirements and demands for ethical AI will be discussed at our Leadership Summit on October 9.