Global convention on ethical AI

Call for Global Convention on Ethical AI

Whilst UNI Global Union fully supports the tabling of these ethical considerations, it is not good enough that they take place in exclusive, disconnected business circles or in the closed realms of academia

What will happen on the day robots become smarter than humans?
And are we ready for it?

Sam Harris asks these two fundamental questions in his mind-boggling TED Talk. He answers with a scenario:

“Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.”

“AI will decimate middle class jobs, worsen inequality and risk significant political upheaval.”

Stephen Hawking

Harris is not alone in raising concerns about the limits and boundaries of artificial intelligence. Indeed, the world-famous Professor Stephen Hawking warned as early as 2014 that “the development of full artificial intelligence could spell the end of the human race.” He later continued: “AI will decimate middle class jobs, worsen inequality and risk significant political upheaval.” He concluded: “We have the means to destroy our world, but not escape it.”

Other academics and experts, like the two AI professors Evers and Pantic who addressed UNI's Leadership Summit, disagree about the destructive potential of AI and machine learning, and caution that we are years, if not decades, away from a machine intelligence that outplays humans (TechCrunch 2016; The Telegraph 2016). They see machine learning and AI as forms of intelligence that will benefit humans, and point to the many areas where AI is already benefiting humans, not least in the field of healthcare.

Eric Schmidt, the Executive Chairman of Google’s Alphabet, said:

“Imagine a world where clever apps and devices could help us recognize every person we’ve ever met, recall anything we’ve ever said, and experience any moment we’ve ever missed. A world where we could in effect speak every language. (We already see glimmers of this today with Google Translate.) Sophisticated AI-powered tools will empower us to better learn from the experiences of others, and to pass more of our learnings on to our children.”

Despite these two fundamentally different opinions on how AI will affect our societies, both groups believe AI is here to stay, and both agree that it will significantly change our labour markets, skill requirements and jobs. In addition, a growing number of disconnected groups and companies are beginning to focus on the core issues of the ethics of AI and machine learning (www.futureoflife.org, World Economic Forum 2016, New York Times 2016).

Google’s Eric Schmidt has created three principles for AI to ensure it positively benefits humans (Time 2015):

  1. AI should benefit the many, not the few. AI should aim for the common good.
  2. AI research and development should be open, responsible and socially engaged.
  3. Those who design AI should establish best practices to avoid undesirable outcomes. There should be verification systems that evaluate whether an AI system is doing what it was built to do.

Academics Diakopoulos and Friedler argue that AI accountability can be analysed through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness. And, although criticised for being out-of-date and not suitable for super-intelligent AI, Isaac Asimov's 1942 "Three Laws of Robotics" provide yet another set of rules, namely: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

“UNI Global Union calls for the establishment of a global convention on the ethical use, development and deployment of artificial intelligence, algorithms and big data.”

Philip Jennings, UNI Global Union

Multi-stakeholder convention

Whilst UNI Global Union fully supports the tabling of these ethical considerations, it is not good enough that they take place in exclusive, disconnected business circles or in the closed realms of academia. The global nature of the digital economy, and its global implications for workers and citizens alike, require a global solution.

UNI Global Union therefore calls for the establishment of a global convention on the ethical use, development and deployment of artificial intelligence, algorithms and big data. Academics, businesses, trade unions, consumers, governments and civil society organisations should unite to establish these global standards, so that not only is a just transition to the future world of work and technology guaranteed, but future technology is also applied in the interest of humans, in a just, fair, transparent and sustainable way.