Is Artificial Intelligence a Threat to Human Rights?

Could artificial intelligence (AI) annihilate human rights? Such a scenario is less dystopian than one might think. Many experts from the academic, political and economic spheres have highlighted the dangerous effects of machine-learning algorithms1-2. In November 2017, a few months before his death, the world-renowned cosmologist Stephen Hawking warned that AI could replace humanity with a new form of life. According to him, humans must therefore control the evolution of technology to avoid extinction, and he argued for aligning the values of AI with human values3. It is important to stress that the scenario of human extinction remains hypothetical. This article, however, aims to demonstrate that current AI technology is already threatening human rights.


Illustration: Andrea Danti / Shutterstock.

I - What is artificial intelligence?

While the exact definition of artificial intelligence (AI) remains a subject of extensive debate within the scientific community, it is nevertheless possible to outline a general one. AI refers to tasks performed by machines that would require human intelligence if humans performed them4. There are two types of AI. On the one hand, there is weak AI, characterised by computational processes intended to simulate human intelligence. On the other hand, there is strong AI, which refers to computational processes within a machine that can learn from past experience. Strong AI thus seeks to understand the environment in which it evolves; accordingly, such intelligence can become autonomous and improve its behaviour5.
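To make the distinction concrete, the following minimal Python sketch contrasts a fixed, hand-coded rule that merely simulates a judgement with a system that updates its behaviour from labelled past experience. It is an illustration of the general idea only, not drawn from the cited sources; all names and data are invented.

```python
# A hand-coded rule: it simulates a judgement but cannot improve.
def rule_based_spam_check(message: str) -> bool:
    return "free money" in message.lower()

# A learning system: it keeps per-word scores and updates them from
# labelled past experiences, so its future behaviour changes with feedback.
class LearningSpamCheck:
    def __init__(self) -> None:
        self.scores: dict = {}

    def learn(self, message: str, is_spam: bool) -> None:
        # Reinforce or weaken each word's association with spam.
        for word in message.lower().split():
            self.scores[word] = self.scores.get(word, 0) + (1 if is_spam else -1)

    def predict(self, message: str) -> bool:
        total = sum(self.scores.get(w, 0) for w in message.lower().split())
        return total > 0

agent = LearningSpamCheck()
agent.learn("win free money now", is_spam=True)
agent.learn("meeting notes attached", is_spam=False)
print(agent.predict("free money inside"))  # True: learned from experience
```

Note that even the learning version remains "weak" in the article's sense: it adapts within a single narrow task rather than understanding the environment in which it operates.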

The scientific debate is ongoing as to the potential emergence of an AI that could exceed human intelligence6. In this regard, the director of Oxford University’s Future of Humanity Institute, Nick Bostrom, defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest"7. Bostrom strongly believes in the advent of superintelligence. Yet he insists that superintelligent machines will not necessarily understand our common sense or the meaning of human dignity8.


II - What types of human rights are affected by artificial intelligence?

There are many schools of thought regarding the definition and origins of human rights9. However, it is important to mention two types of international human rights on which AI has an impact.

First, there are the economic and social rights enshrined in the 1966 International Covenant on Economic, Social and Cultural Rights. These include the right to work, the right to social security, the right to an adequate standard of living and the right of access to education and health10. AI might disrupt the labour market, and such a disruption would affect the right to work11. In a similar vein, the increasing use of AI-based software in public administration could undermine access to social security12. In this respect, as Chapter III shows, some algorithms tend to give more weight to the efficiency of social aid than to the preservation of well-being.

Second, there are the civil and political rights mentioned in the International Covenant on Civil and Political Rights of 1966 and in the European Convention on Human Rights. Civil and political rights include non-discrimination, the right to life, the prohibition of torture and freedom of opinion and expression13-14. The use of autonomous killer robots could call into question the right to life15. Furthermore, as Chapter VI highlights, computer programs using machine-learning algorithms tend to discriminate against minorities16. This would affect the principles of equality and non-discrimination.


III - Artificial intelligence might increase socio-economic inequalities

Technological innovations affect the labour market through the displacement effect: the replacement of human tasks with tasks performed by new technologies17. According to economists Levy and Murnane, in the short term, non-routine tasks requiring a high degree of intellectual reasoning are difficult to replace. Nevertheless, they observe that the displacement effect is already under way in many areas of work: jobs requiring manual skills are declining, while the number of jobs requiring digital skills is growing significantly18. According to the McKinsey Global Institute, the advent of AI could have an impact on the economic structure of society 3,000 times greater than the introduction of steam power in the 18th century, and its rise could force as many as 375 million people to change jobs by 203019. Hence, a significant proportion of the world's population may end up unemployed if no retraining policy is implemented.

Beyond the risk of massive unemployment, the use of AI presents another socio-economic risk. As political science professor Virginia Eubanks argues, AI could have a drastically negative effect on public services. She argues that the use of AI software programs to optimise the allocation of social aid can lead to dangerous biases, which may increase socio-economic inequalities and depoliticise analyses of the causes of poverty20.

Indeed, she stresses that certain algorithms tend to give more weight to the efficiency and economy of public finances than to the welfare of the population. This might result in a society in which a significant part of the population receives no social assistance because the AI deems providing aid to those individuals inefficient21.
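A toy example can make this mechanism visible. The following Python sketch uses a hypothetical scoring rule, not the software Eubanks studied: as the weight placed on administrative efficiency grows, the neediest but most "costly" applicant falls to the bottom of the queue.

```python
applicants = [
    # (name, need 0-1, expected administrative cost 0-1) -- invented data
    ("A", 0.9, 0.8),   # high need, costly to serve
    ("B", 0.5, 0.2),   # moderate need, cheap to serve
    ("C", 0.3, 0.1),   # low need, very cheap to serve
]

def allocation_score(need, cost, efficiency_weight):
    # A welfare-first rule would rank by need alone; adding a cost penalty
    # shifts the ranking toward whoever is cheapest to process.
    return need - efficiency_weight * cost

for w in (0.0, 1.0, 2.0):
    ranking = sorted(applicants,
                     key=lambda a: allocation_score(a[1], a[2], w),
                     reverse=True)
    print(f"efficiency weight {w}: {[name for name, _, _ in ranking]}")
# weight 0.0 ranks A first (pure welfare); weight 2.0 pushes A last,
# even though A has the greatest need.
```

The bias here is not a coding error but a design choice: the objective function itself decides whose welfare counts.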

Starting from the observation that AI does not weigh ethical considerations the way humans do, researchers from the Future of Humanity Institute hypothesise that such intelligence could favour the concentration of economic resources if programmers do not include moral criteria, such as the universality of allocation and magnanimity, in their algorithms22.


IV - Artificial intelligence could harm humans

Could AI decide to harm humans? The answer depends on the moral attributes of AI. On this point, Mathias Risse, professor of philosophy and public policy at Harvard University, states that it is necessary to align the values of intelligent machines with those of humans23. He reminds us of the code of conduct developed by Isaac Asimov in his short story "Runaround". It comprises three fundamental laws, whose strict priority ordering is sketched in code after the list24-25:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
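The laws are not independent rules but a strict hierarchy, each yielding to those above it. The following Python sketch is a toy encoding of that priority ordering, not anything proposed by Asimov or Risse; it only makes the structure explicit.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would the action injure a human, by act or inaction?
    disobeys_order: bool   # would it violate an order given by a human?
    endangers_robot: bool  # would it put the robot's own existence at risk?

def permissible(a: Action) -> bool:
    # First Law dominates everything: no harm to humans, full stop.
    if a.harms_human:
        return False
    # Second Law: obey human orders, except where obedience would conflict
    # with the First Law (already ruled out by the check above).
    if a.disobeys_order:
        return False
    # Third Law: self-preservation matters only once the higher laws are
    # satisfied, so a self-sacrificing action can still be permissible.
    return True

# A self-endangering rescue that obeys an order and harms no one: allowed.
print(permissible(Action(harms_human=False,
                         disobeys_order=False,
                         endangers_robot=True)))  # True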

The purpose of these laws is to protect humans from the hypothetical scenario of an AI takeover. In Professor Risse's view, however, these laws are insufficient and outdated. For that reason, he prefers the 23 Asilomar principles developed by the Future of Life Institute, an organisation bringing together researchers in the field of AI26.

Among these are principles aimed at preventing a disaster scenario. The first is the principle of alignment with human values, such as human dignity, fundamental rights, freedom and cultural diversity27. The second is the principle of transparency (intelligibility) of the decision-making process: if such a principle is respected, human auditors ought to be able to understand and deconstruct the machine's reasoning. In other words, one should be able to understand why a given AI system decided to harm individuals or groups28.
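As a minimal illustration of what intelligibility demands, the following Python sketch shows a decision procedure that ships every output with the reasoning trace that produced it. The rule and thresholds are hypothetical, not any deployed system.

```python
def decide_benefit(income: float, dependants: int, explain: bool = False) -> str:
    trace = [f"inputs: income={income}, dependants={dependants}"]
    if income < 20_000:
        trace.append("income below 20,000 -> grant")
        decision = "grant"
    elif dependants >= 3:
        trace.append("three or more dependants -> grant")
        decision = "grant"
    else:
        trace.append("no criterion met -> refuse")
        decision = "refuse"
    if explain:
        # The intelligibility requirement: a human auditor can read and
        # deconstruct each step that led to the decision.
        for step in trace:
            print(step)
    return decision

print(decide_benefit(25_000, 1, explain=True))  # refuse, with a readable trace
```

A deep neural network offers no such native trace, and reconstructing one after the fact remains an open research problem, which is precisely the risk the next paragraph describes.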

There is a risk of machine calculations becoming so fast and complex that their reasoning will become too opaque and abstract for the human brain to comprehend. An incomprehensible decision would render human ethics obsolete because the machine’s decision would probably be the result of what might be called a complexity bias – that is, a type of intellectual reasoning that does not respect the necessary simplicity of human reasoning and morality29.

The picture above shows an "Artificial" Vitruvian Man. / Tayeb Mezahdia – Pexels.


V - Killer robots could pose a danger to humanity

Lethal autonomous weapon systems (LAWS) can be described as armed robots endowed with AI30. They could target and kill individuals on the basis of their algorithms rather than the command of a human superior31. According to Stuart Russell, a professor of computer science, the actions of such robots would be inconsistent with international humanitarian law because they would not respect core principles such as proportionality and necessity. In other words, there is a risk that an autonomous weapon would act disproportionately or unnecessarily against human targets32.

In addition, Professor Russell believes that the construction of lethal autonomous weapons would enable the creation of giant armies that might undermine global human security. Indeed, he stresses that a single cargo plane can transport up to three million autonomous weapons33, and that coordinating an attack by one million autonomous robots would require the supervision of just five human operators. In light of this scenario, Russell concludes that LAWS could be deployed as weapons of mass destruction34.

According to several experts working in the arms industry, the trend towards the automation of weapons is inevitable. This observation leads robotics expert Noel Sharkey to argue for mandatory human supervision of LAWS: according to him, humans ought to be able to limit the actions of robots if these machines do not respect international humanitarian law35.

Nevertheless, according to many experts in the fields of human rights and AI, the only solution that would prevent a disaster is a ban on the production of LAWS. In this respect, a coalition of non-governmental organisations met in April 2013 to launch the Campaign to Stop Killer Robots36. Later, in 2015, around 1,000 AI experts at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires signed a letter calling for the prohibition of offensive autonomous weapons. The letter argues that the creation of such weapons would trigger a global arms race whose outcome could be tragic for humanity. It was signed by several leading figures in science and new technologies, including Elon Musk, Steve Wozniak and Stephen Hawking37-38.

In the future, LAWS are expected to become smaller and less expensive. The picture above shows an example of Micro Autonomous Systems and Technology, or MAST. / Jhi Scott - U.S. Army.


VI - Artificial intelligence could reinforce discrimination

According to recent research, facial recognition programs using machine-learning algorithms might exacerbate gender and ethnic discrimination39. One hypothesis is that the lack of diversity within AI-related professions tends to lead to the creation of biased algorithms that discriminate against certain minorities40.

For instance, in May 2016, the non-profit journalism organisation ProPublica highlighted the racial discrimination caused by COMPAS, an algorithm used in the United States to predict the likelihood of criminal recidivism. According to ProPublica, the software greatly overestimates the risk of recidivism for African Americans: comparing its predictions with actual outcomes over the two years following release shows that the algorithm correctly predicted recidivism in only 61 per cent of cases, and that African American defendants were far more likely than white defendants to be wrongly flagged as high risk41. The fundamental problem is that American judges are increasingly relying on the predictions of such software programs42.
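The kind of audit ProPublica performed can be sketched in a few lines of Python. The records below are invented toy data, not ProPublica's dataset; the point is only to show how two groups can share the same overall accuracy while one bears nearly all the false "high risk" labels.

```python
# Toy audit of a risk tool across two groups. Each record is
# (group, predicted_high_risk, actually_reoffended). Data are invented.
records = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, False),
    ("group_b", False, True), ("group_b", False, False),
]

def false_positive_rate(rows):
    # Share of people who did NOT reoffend but were labelled high risk.
    negatives = [r for r in rows if not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    accuracy = sum(1 for r in rows if r[1] == r[2]) / len(rows)
    print(f"{group}: accuracy={accuracy:.0%}, "
          f"false positive rate={false_positive_rate(rows):.0%}")
# Both groups score 75% accuracy, yet only group_a members are ever
# wrongly flagged as high risk (33% vs 0%).
```

This is why a single headline accuracy figure can mask discrimination: the harm lies in how the errors are distributed.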

In May 2013, Eric L. Loomis was convicted of eluding a police officer and driving a car without its owner's consent. The judge presiding over the case used the results of COMPAS to justify a seven-year sentence: the software had determined that Loomis presented a high risk of recidivism, and the judge treated this result as evidence justifying the lengthy sentence43.

According to the Electronic Privacy Information Center, a U.S. civil liberties organisation, courts use criminal risk prediction software in most U.S. states. Sometimes the results serve merely as an indication; at other times, they directly contribute to the judge's sentencing. The problem is that these software programs belong to private companies, so defendants cannot challenge the results because they cannot access the underlying data: in the United States, the only data accessible to defendants are those used or developed by public institutions44.

In response to the risks of discrimination posed by programs using machine-learning algorithms, a coalition of non-governmental organisations and technology companies met in May 2018 to draft the Toronto Declaration, which calls on developers to prevent bias in their algorithms45.


VII - Necessary collaboration between the human rights and artificial intelligence communities

As mentioned above, the development of AI appears inevitable. How, then, can human rights be protected from such intelligence? There are two solutions. First, it is necessary to analyse and correct the biases in certain algorithms so that they do not cause inequality, death and discrimination. Second, more human rights specialists must work in the field of AI research. After all, as Mathias Risse states, the future will be the result of the work of AI experts; whether the world of tomorrow will also be the result of work done by human rights defenders remains to be seen46.

Professor Risse claims that a significant part of the AI community regards human rights as a form of 'ethical imperialism'47. Moreover, he observes that some experts in the field of machine learning consider human rights an outdated way of thinking. According to such experts, ethics stems from rationality; a machine would therefore make more ethical decisions than a human because AI would have access to more data. However, this argument overlooks the complexity biases of algorithms described above. Human rights are thus in danger if programmers do not include human values in their programs. Amnesty International has recognised the importance of guiding the development of AI and has launched a project aimed at defending human rights from it48. Since 2017, the organisation has been conducting research to analyse discrimination bias, particularly in the area of criminal justice49.

By Alvaro Candia Callejas


Recommended resources

BOSTROM, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014, 352 pp. 

BRUNDAGE, Miles. Et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, Future of Humanity Institute, University of Oxford, 2018, 101 pp.

EUBANKS, Virginia. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St. Martin’s Press, 2018, 272 pp.

RISSE, Mathias. “Human Rights and Artificial Intelligence: An Urgently Needed Agenda”, Faculty Research Working Paper Series, Harvard Kennedy School, No. RWP18-015, 2018, 18 pp.


References

1. CLIFFORD, Catherine. “Mark Cuban on dangers of A.I.: If you don’t think Terminator is coming ‘you’re crazy’”, CNBC, 25.07.2018. Available online: https://www.cnbc.com/2018/07/25/mark-cuban-dangers-of-ai-terminator-is-coming.html.

2. BRUNDAGE, Miles. Et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, Future of Humanity Institute, University of Oxford, 2018, p.13.

3. SULLEYMAN, Aatif. “Stephen Hawking warns artificial intelligence ‘may replace humans altogether’”, The Independent, 02.11.2017. Available online: https://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-artificial-intelligence-fears-ai-will-replace-humans-virus-life-a8034341.html.

4. BRINGSJORD, Selmer. GOVINDARAJULU, Naveen Sundar. “Artificial Intelligence”, The Stanford Encyclopedia of Philosophy. Available online: https://plato.stanford.edu/entries/artificial-intelligence/#WhatExacAI.

5. PEREZ, Javier Andreu. Et al. “Artificial Intelligence and Robotics”, UK-RAS Network, 2018, p. 6.

6. GRACE, Katja. Et al. “When Will AI Exceed Human Performance? Evidence from AI experts”, Future of Humanity Institute, University of Oxford, 2017, p. 3.

7. BOSTROM, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014, p. 40.

8. Ibid, p. 30.

9. DEMBOUR, Marie-Bénédicte. “What Are Human Rights? Four Schools of Thought”, Human Rights Quarterly, Vol. 32, No. 1, 2010, pp. 3-4.

10. UN GENERAL ASSEMBLY. International Covenant on Economic, Social and Cultural Rights, New York, 16.12.1966.

11. PETROPOULOS, Georgios. “The Impact of Artificial Intelligence on Employment” in Max NEUFEIND (Ed). Et al. Work in the Digital Age: Challenges of the Fourth Industrial Revolution. Lanham: Rowman & Littlefield, 2018, p. 124.

12. EUBANKS, Virginia. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St. Martin’s Press, 2018, pp.10-13.

13. UN GENERAL ASSEMBLY. International Covenant on Civil and Political Rights, New York, 19.12.1966.

14. COUNCIL OF EUROPE. European Convention on Human Rights, Rome, 4.11.1950.

15. RUSSELL, Stuart. “AI and Lethal Autonomous Weapons”. Group of Governmental Experts on Lethal Autonomous Weapons Systems, United Nations Office at Geneva, 2017, p. 2.

16. OSOBA, Osonde. WELSER IV, William. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Santa Monica: Rand Corporation, 2017, p. 13.

17. LEVY, Frank. MURNANE, Richard. “Dancing with Robots: Human Skills for Computerized Work”, NEXT Report, 2013, available online: http://content.thirdway.org/publications/714/Dancing-With-Robots.pdf.

18. Loc. cit.

19. MANYIKA, James. Et al. “Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages”, McKinsey Global Institute, 2017, available online: https://www.mckinsey.com/featured-insights/future-of-organizations-and-work/Jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages.

20. EUBANKS, Virginia. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St. Martin’s Press, 2018, pp.10-13.

21. Ibid, pp. 195-198.

22. BOSTROM, Nick. DAFOE, Allan. FLYNN, Carrick. “Public Policy and Superintelligent AI: A Vector Field Approach” in S. Matthew LIAO (Ed). Ethics of Artificial Intelligence. Oxford: Oxford University Press, 2019 (forthcoming), p. 14. Available online: https://nickbostrom.com/papers/aipolicy.pdf.

23. RISSE, Mathias. “Human Rights and Artificial Intelligence: An Urgently Needed Agenda”, Faculty Research Working Paper Series, Harvard Kennedy School, No. RWP18-015, 2018, p. 8.

24. Ibid, p. 9.

25. BARTHELMESS, Ulrike. FURBACH, Ulrich. “Do we need Asimov’s Laws?”, MIT Technology Review, 16.05.2014. Available online: https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/.

26. RISSE, Mathias. Op. cit., p. 9.

27. Loc. cit.

28. Loc. cit.

29. Loc. cit.

30. CROOTOF, Rebecca. “The Killer Robots are Here: Legal and Policy Implications”, Cardozo Law Review, Vol. 36:1837, 2015, p. 1851.

31. Ibid, p. 1862.

32. RUSSELL, Stuart. “AI and Lethal Autonomous Weapons”. Group of Governmental Experts on Lethal Autonomous Weapons Systems, United Nations Office at Geneva, 2017, p. 1.

33. Ibid, p. 2.

34. Ibid, p. 2.

35. SHARKEY, Noel. “Towards a principle for the human supervisory control of robot weapons”, Politica & Società, Vol. 2, 2014, p. 316.

36. CARPENTER, Charli. “Beware the Killer Robots: Inside the Debate over Autonomous Weapons”, Foreign Affairs, 03.07.2013. Available online: https://www.foreignaffairs.com/articles/united-states/2013-07-03/beware-killer-robots.

37. FUTURE OF LIFE INSTITUTE. “Autonomous Weapons: An Open Letter from AI & Robotics Researchers”, Futureoflife.org, 28.07.2015. Available online: https://futureoflife.org/open-letter-autonomous-weapons/.

38. GIBBS, Samuel. “Elon Musk leads 116 experts calling for outright ban of killer robots”, The Guardian, 20.08.2017. Available online: https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war.

39. LOHR, Steve. “Facial Recognition is Accurate, if You’re a White Guy”, The New York Times, 09.02.2018. Available online: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

40. RAM, Aliya. “AI risks replicating tech’s ethnic minority bias across business”, Financial Times, 31.05.2018. Available online: https://www.ft.com/content/d61e8ff2-48a1-11e8-8c77-ff51caedcde6.

41. ANGWIN, Julia. LARSON, Jeff. KIRCHNER, Lauren. “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”, ProPublica, 23.05.2016. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

42. HAMILTON, Melissa. “We use big data to sentence criminals. But can the algorithms really tell us what we need to know?”, The Conversation, 06.06.2017. Available online: https://theconversation.com/we-use-big-data-to-sentence-criminals-but-can-the-algorithms-really-tell-us-what-we-need-to-know-77931.

43. Loc. cit.

44. ELECTRONIC PRIVACY INFORMATION CENTER. “Algorithms in the Criminal Justice System”, Epic.org. Available online: https://epic.org/algorithmic-transparency/crim-justice/.

45. HUMAN RIGHTS WATCH. “The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems”. Hrw.org, 03.07.2018. Available online: https://www.hrw.org/news/2018/07/03/toronto-declaration-protecting-rights-equality-and-non-discrimination-machine.

46. RISSE, Mathias. Op. cit., p. 10.

47. Loc. cit.

48. Loc. cit.

49. BACCIARELLI, Anna. “Artificial Intelligence: the technology that threatens to overhaul our rights”. Amnesty International, 20.06.2017. Available online: https://www.amnesty.org/en/latest/research/2017/06/artificial-intelligence-the-technology-that-threatens-to-overhaul-our-rights/.
