
Regulating Ethical Artificial Intelligence

  • June 28, 2019
  • Clayton Rice, K.C.

In June 2018 the European Commission set up the High-Level Expert Group on Artificial Intelligence (HLEG AI), consisting of fifty-two representatives from academia, civil society and industry. The HLEG AI was charged with the general objective of supporting the implementation of the European strategy on artificial intelligence. It has now released the second of its two reports on Trustworthy AI: the first setting out ethics guidelines and the second policy and investment recommendations.

1. Ethics Guidelines

The first report, Ethics Guidelines for Trustworthy AI, was released on April 8, 2019, with the aim of promoting trustworthy artificial intelligence built on three components: it should be lawful, ethical and robust. The guidelines, however, do not explicitly address the first component, lawfulness, aiming instead to offer guidance on fostering and securing ethical and robust AI. Still, AI systems “do not operate in a lawless world” and the report examines the sources of law that are relevant to their development, deployment and use.

The report advocates an approach to AI ethics based on the fundamental rights contained in EU treaties, the EU Charter and international human rights law. “Respect for fundamental rights within a framework of democracy and the rule of law,” the report states at pp 9-10, “provides the most promising foundation for identifying abstract ethical principles and values, which can be operationalized in the context of AI.” The framework is grounded in a “human-centric approach” in which the human being “enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields.” The following groups of fundamental rights are identified as suitable for application to AI systems, at pp 10-1:

  • Respect for human dignity. Human dignity encompasses the idea that every human being possesses an “intrinsic worth”, which should never be diminished, compromised or repressed by others – nor by new technologies like AI systems.
  • Freedom of the individual. Human beings should remain free to make life decisions for themselves. In an AI context, freedom of the individual requires, for instance, mitigation of (in)direct illegitimate coercion, threats to mental autonomy and mental health, unjustified surveillance, deception and unfair manipulation.
  • Respect for democracy, justice and the rule of law. AI systems should serve to maintain and foster democratic processes and respect the plurality of values and life choices of individuals. AI systems must not undermine democratic processes, human deliberation or democratic voting systems.
  • Equality, non-discrimination and solidarity. In an AI context, equality entails that the system’s operations cannot generate unfairly biased outputs (e.g., the data used to train AI systems should be as inclusive as possible, representing different population groups). This also requires adequate respect for potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, children, consumers or others at risk of exclusion.
  • Citizens’ rights. AI systems offer substantial potential to improve the scale and efficiency of government in the provision of public goods and services to society. At the same time, citizens’ rights could also be negatively impacted by AI systems and should be safeguarded. The use of the term “citizens’ rights” is not to deny or neglect the rights of third-country nationals and irregular (or illegal) persons in the EU who also have rights under international law, and – therefore – in the area of AI systems.

Drawing inspiration from European and international human rights law, the report goes on to specify four ethical principles as “imperatives” that AI practitioners should strive to adhere to, at pp 12-3:

  • Respect for human autonomy. Humans interacting with AI systems must be able to keep full and effective self-determination over themselves, and be able to partake in the democratic process. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans.
  • Prevention of harm. AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity.
  • Fairness. Fairness has both a substantive and procedural dimension. The substantive dimension implies a commitment to ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatization. The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them.
  • Explicability. Explicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.

Tensions may arise between these ethical principles, and such tensions can often be resolved by a balancing analysis. The report uses “predictive policing” as an example that raises a conflict between the principle of prevention of harm and that of human autonomy. There are, however, situations where an ethical trade-off is not acceptable: certain fundamental rights, such as human dignity, are absolute and cannot be subject to a balancing exercise.

2. Policy and Investment Recommendations

The second report, Policy and Investment Recommendations, was released this week on June 26, 2019, and contains thirty-three recommendations that can guide Trustworthy AI towards “sustainability, growth and competitiveness”. The recommendations focus on: (1) humans and society at large; (2) the private sector; (3) the public sector; and, (4) Europe’s research and academia. Here are four key recommendations.

First, in the section dealing with Europe’s public sector, at p 20, the report calls for a ban on AI-enabled mass scale scoring of individuals as defined in the Ethics Guidelines. It goes on to recommend strict rules for “surveillance for national security purposes” and the development of ways to ensure that surveillance “is not used to suppress or undermine (political) opposition or democratic processes.”

Second, in the section on governance and regulatory framework, at pp 37-8, the report advocates a risk-based approach to regulation, arguing that “various risk classes should be distinguished as not all risks are equal”. The higher the impact or probability of an AI-created risk, the stricter the regulatory response should be.

Third, in the discussion of adverse impacts, the report recommends continuous evaluation of whether AI systems generate risks not adequately covered by existing legislation. The report goes on to state, at p 40, that individuals should not be subject to “unjustified personal, physical or mental tracking or identification, profiling and nudging through AI powered methods of biometric recognition”. Biometric recognition includes emotional tracking, behavioural identification and affect recognition, uses that will only grow as AI systems advance.

Fourth, the report calls for a proposal to be made to international partners for the adoption of a moratorium on the development of lethal autonomous weapons systems (LAWS), popularly known as “killer robots”. The report suggests, at p 40, that “cyber attack” should be included in the definition because such attacks can achieve lethal consequences.

The recommendations on the impact of AI systems on individual self-determination, and their use as an instrument of mass state surveillance, brought immediate coverage in the online media. I have commented in a previous post on China’s social credit system and the use of facial recognition technology to monitor its minority Uighur population. In a post to POLITICO Europe titled AI experts call to curb mass surveillance dated June 24, 2019, Janosch Delcker said that, although artificial intelligence offers “vast opportunities” for the benefit of humanity, it is open to abuse “by authoritarian regimes to set up a ubiquitous surveillance apparatus.” (See also: On The Wire. Face Recognition Technology. April 28, 2019)

3. Conclusion

The policy recommendations made by the EU High-Level Expert Group on AI are not the only ethics principles and guidelines published recently. On May 22, 2019, forty-two countries, including Canada, adopted the artificial intelligence principles of the Organisation for Economic Co-operation and Development (OECD). On June 9, 2019, the G20 nations adopted human-centered AI principles based on those of the OECD.

In a post to DataEthics.eu titled EU High Level Expert Group on AI launches policy recommendations dated June 26, 2019, independent data ethics advisor and HLEG AI member, Gry Hasselbalch, asserted that the EU guidelines differ from the OECD principles in one central way – the EU guidelines have a framework for enforcement and implementation. “[T]he guidelines’ core foundation,” she wrote, “is a European Fundamental Rights framework that in fact forms part of the core principles of European law.”

The next European Commission is scheduled to take office on November 1, 2019, and regulation of artificial intelligence will be a priority as Europe hastens to keep pace with the United States and China as a leader in AI technology. The debate among policymakers is expected to centre on whether the best regime is one guided by bright lines or broad principles, and how to avoid killing innovation through regulation. “We want to have the development of AI,” Justice Commissioner Věra Jourová said, “but not at the expense of fundamental rights and freedoms.” (See also: James Vincent. EU should ban AI-powered citizen scoring and mass surveillance, say experts. The Verge. June 26, 2019)
