Human Rights & Emerging Technologies: AI’s Ethical Ambiguities
Emerging technologies, including virtual and augmented realities, biotechnologies, 3D printing, and artificial intelligence (AI), present exciting opportunities for society and investors alike to solve new problems and streamline existing processes. However, new technologies are not devoid of risk. As cutting-edge innovations are more widely implemented and develop additional applications, new challenges will arise, and leading companies will need to integrate risk-mitigating considerations into their product development lifecycles and maintain robust post-commercialization monitoring frameworks thereafter.
In conjunction with its 2017 Global Risks Report (GRR), the World Economic Forum (WEF) published an article detailing the benefits and risks of 12 key technological innovations.
The WEF’s survey group – consisting of stakeholders from businesses, governments, academia, and nongovernmental and international organizations – identified AI and robotics as the technologies with the highest potential benefits and, simultaneously, the highest potential risks. AI in particular has sparked well-documented anxiety in blue- and white-collar workplaces alike due to its potential to automate human job functions. However, beyond the more obvious risks associated with automation, the WEF also raises concerns over AI’s capacity (or lack thereof) to make difficult, morally ambiguous decisions. The WEF asks, for instance, “in a world where machines are powered by artificial intelligence…if a self-driving car has the choice to either crash into a person crossing the street, or crash into a wall injuring, or possibly killing, its passengers, how can it make that split-second decision, one that even humans struggle with?”
Biased Outcomes from AI & Human Rights Implications
In a 2018 white paper, the WEF’s Global Future Council on Human Rights and Technology outlines the discrimination risks associated with AI and machine learning. When designed and used responsibly, AI systems can increase efficiency and help to eliminate human biases and errors in decision-making. Conversely, poorly designed AI systems can reinforce systemic biases, automate discriminatory practices, and ultimately violate basic human rights. The WEF argues that a variety of factors can trigger such unfavorable outcomes, including training machine learning models on unrepresentative data samples, choosing the wrong model, building a model on unconsciously discriminatory assumptions, and developing AI without some level of humanistic input.
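To make the data-sampling concern concrete, the short Python sketch below shows one simple audit a developer might run on a deployed decision system: comparing approval rates across demographic groups and flagging gaps that exceed a chosen tolerance. The group labels, decision data, and four-fifths tolerance are hypothetical illustrations, not figures or methods from the WEF paper.

```python
# Illustrative bias audit for a deployed decision model.
# The groups, decision data, and 0.8 ("four-fifths") tolerance are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, tolerance=0.8):
    """Flag ordered group pairs whose approval-rate ratio falls below the tolerance."""
    return [
        (g, h, round(rates[g] / rates[h], 2))
        for g in rates
        for h in rates
        if g != h and rates[h] > 0 and rates[g] / rates[h] < tolerance
    ]

if __name__ == "__main__":
    # Hypothetical outcomes: group A was well represented in the training data,
    # group B was not, and the resulting model approves group B far less often.
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 45 + [("B", False)] * 55)
    rates = approval_rates(decisions)
    print("Approval rates:", rates)                        # {'A': 0.8, 'B': 0.45}
    print("Flagged pairs:", disparate_impact_flags(rates)) # [('B', 'A', 0.56)]
```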
Drawing on guidance from the UN Guiding Principles on Business and Human Rights, the white paper’s authors propose four principles to preempt AI biases:
Active Inclusion – soliciting a diversity of normative values when developing a system
Fairness – considering which definition of fairness best fits the product’s application
Right to Understanding – enabling end-users to understand the system’s decision-making model (see the sketch following this list)
Access to Redress – ensuring that AI developers monitor their technology’s potential negative human rights impacts and effectively provide remedies to affected individuals
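As a rough illustration of the “Right to Understanding” principle, the sketch below shows how a developer using a simple linear scoring model might surface each factor’s contribution to an individual decision so an affected end-user can see why it was made. The feature names, weights, and threshold are hypothetical and are not drawn from the white paper; real systems with more complex models would need correspondingly more sophisticated explanation techniques.

```python
# Illustrative per-decision explanation for a simple linear scoring model.
# Feature names, weights, and the decision threshold are hypothetical.
WEIGHTS = {"income": 0.6, "debt_ratio": -0.9, "years_employed": 0.4}
THRESHOLD = 0.5

def explain(applicant):
    """Return the decision plus each feature's signed contribution to the score."""
    contributions = {f: round(w * applicant[f], 3) for f, w in WEIGHTS.items()}
    total = round(sum(contributions.values()), 3)
    decision = "approved" if total >= THRESHOLD else "declined"
    return {"decision": decision, "score": total, "contributions": contributions}

if __name__ == "__main__":
    applicant = {"income": 0.7, "debt_ratio": 0.5, "years_employed": 0.6}
    print(explain(applicant))
    # {'decision': 'declined', 'score': 0.21,
    #  'contributions': {'income': 0.42, 'debt_ratio': -0.45, 'years_employed': 0.24}}
    # The breakdown shows the debt_ratio term pushed the score below the 0.5 threshold.
```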
Getting Ahead of the Curve
The WEF’s 2017 article encourages developers to collaborate with domestic and international governments to establish ethical parameters for AI and other emerging technologies as innovation continues. However, its 2018 white paper recognizes that, in practice, regulatory developments frequently lag behind business innovations. Thus, in order to properly uphold these principles, the WEF recommends that companies take the initiative and proactively adopt a three-step “human rights due diligence” process: identify human rights risks linked to business operations; take effective action to prevent and mitigate those risks by rethinking existing business ethics models; and encourage transparency through published third-party audits of company efforts to identify, prevent, and mitigate human rights risks.

The WEF’s human rights due diligence framework can, and should, be incorporated into the development of all emerging technologies (the internet of things, blockchain, biotechnologies, etc.). The framework is also broad enough to apply to relevant, contemporary concerns beyond machine learning bias, including carbon emissions, cybersecurity, and other considerations relating to emerging technologies.
Regulations aimed at emerging technologies are likely on the horizon, although their efficacy remains to be seen. In early April 2019, the European Union (EU) published a set of non-binding guidelines for the development of ethical AI. The Verge’s James Vincent outlines the EU’s new guidelines and discusses how companies should approach ethical AI development. According to the EU, companies should consider the following factors: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; environmental and societal well-being; and accountability.

However, many industry experts question the EU’s capacity to exert meaningful influence on the ethical machine learning landscape. Regulations generally move slowly and cannot account for all possible advancements and applications of emerging technologies, many of which may have significant human rights implications. For this reason, companies at the forefront of emerging technology development must self-assess and self-regulate. Regardless of regulatory advancements, on-the-ground developers will bear ultimate responsibility for maintaining product stewardship standards, rendering frameworks like the WEF’s human rights due diligence process of the utmost importance.
This article originally appeared in Malk’s ESG in Private Equity Newsletter, sent out on May 9, 2019.