AI4People 2019 Second Forum Meeting, 10 July, European Parliament

AI4People is a multi-stakeholder forum, bringing together all actors interested in shaping the social impact of new applications of AI, including the European Commission, the European Parliament, civil society organisations, industry and the media.

Launched in February 2018 with a three-year roadmap, the goal of AI4People is to create a common public space for laying out the founding principles, policies and practices on which to build a “good AI society”.

In November 2018, on behalf of AI4People and its partners, Prof. Luciano Floridi presented at the European Parliament the “AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, the result of AI4People’s first year of activity.

Looking ahead to the second year, AI4People will provide an opportunity to build upon the important achievements of the Forum’s first year.

The first AI4People Forum Meeting of 2019 was held on April 2nd at the European Parliament in Brussels, where the Agenda for AI4People’s Scientific Activities for the year was presented.

The second AI4People Forum Meeting of 2019 will be held on July 10th in Brussels, where the initial draft of the “Report on Good AI Governance: Principles, Priorities, and Models of Smart Coordination” will be presented, closing the first phase of its drafting. Members of the AI4People Forum, the European Parliament and the European Commission will discuss the Report and contribute to the next phase of drafting.

Following the AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, the activities in 2019 will focus on the “Governance of AI”. The Recommendations of that document and its 20 Action Points (AP) are the starting point upon which the 2019 activities will be built; most of the issues concerning how to assess, develop and support a Good AI Society entail complex matters of governance.

Accordingly, the first draft of the new AI4People Report on Good AI Governance will be divided into three parts.

Part one will focus on the principles of AI. These principles were first illustrated in our 2018 paper and have since been adopted, among others, by the AI HLEG’s Ethics Guidelines for Trustworthy AI of April 2019. In stressing the link between our 2018 and 2019 work in the background section of this document, the aim is to highlight a substantial convergence on some ethical issues of AI and their corresponding guidelines, as well as the substantial part of today’s law that is already applicable to AI, e.g. the tortious liability regimes of all EU Member States.

The second part of the document will focus on the priorities of today’s AI governance. The ethical principles and recommendations of our 2018 paper are prioritized according to what can be deemed good, right, or lawful and, moreover, what can be done now; a second kind of priority concerns what we reasonably think is good, but which will take time to implement. The document thus proposes three different types of priority, regarding (i) forms of engagement; (ii) no-regrets actions; and (iii) coordination mechanisms.

The forms of engagement concern:

i. The use of participatory mechanisms to ensure alignment with societal values and understanding of public opinion through ongoing dialogue among all the stakeholders;

ii. A European observatory for AI to provide a forum to nurture debate and consensus;

iii. Interdisciplinary and intersectoral cooperation and debate concerning the overlaps among technology, social issues, legal studies, and ethics;

iv. The setting up of legally de-regulated special zones, or living labs, for AI empirical testing and development, so that scientists and lay people can understand whether AI systems fulfill their task specifications in ways that are acceptable and comfortable to humans, while increasing our understanding of how the future of human-AI interaction could turn out.

The set of no-regrets actions includes:

i. The creation of educational curricula and public awareness activities around the impact of AI, involving schools, academia, qualification programs in business, and the public at large;

ii. A sustained, increased and coherent European research effort, which provides for the inclusion of ethical, legal and social considerations in AI research projects, together with research about public perception and understanding of AI and its applications;

iii. The idea of “inclusive innovation” and smooth transitions to new kinds of jobs via rewarding human-machine collaboration;

iv. The capacity of corporate boards of directors to take responsibility for the ethical implications of companies’ AI technologies.

Finally, the coordination mechanisms comprise:

i. A European observatory for AI to accommodate the uncertainties of innovation through appropriate forms of oversight;

ii. Participatory procedures for the alignment of societal values and the understanding of public opinion;

iii. Upstream multi-stakeholder mechanisms for risk mitigation;

iv. Systems for user-driven benchmarking of all marketed AI offerings;

v. Incentives for interdisciplinary and intersectoral cooperation and debate.

Part three of the document will focus on models of AI governance and corresponding forms of legal regulation. The complexity of today’s legal and moral issues in AI regulation calls for specific forms of governance that are neither bottom-up nor top-down. In the EU legal framework, this middle-out layer of governance, i.e. between the top-down and bottom-up approaches, is mostly associated with forms of co-regulation, as defined by Recital 44 of the 2010 AVMS Directive and Article 5(2) of the GDPR. The document intends to show why neither co-regulative models of AI governance nor forms of self-regulation and their variants, e.g. “monitored self-regulation”, are good enough to tackle the normative challenges of AI. Rather, the bar is set between models of self-regulation and co-regulation, since this approach considers both the existence and the limits of current regulatory frameworks, as examined above with the principles of AI.

The aim of drawing attention to the coordination mechanisms examined in part two is threefold. First, they prevent the chilling effect of some ideological debates on the ethics of AI and the complexity of the legal environment. Our stance is pragmatic, for the coordination mechanisms of the model aim to implement the sets of priorities illustrated previously in part two. Second, the stance is dynamic: it is adaptable and flexible, reflective and responsive, and takes into account how different sets of priorities may evolve and how regulators can approach this evolution through procedures for engagement and systemic oversight. Third, the action plan is scalable, because growing amounts of work can be suitably addressed by adding resources to the mechanisms set up for coordination. Forms of legal experimentation, such as the special zones for AI empirical testing and development established over the past years in Japan (AI robotics), Germany and Sweden (self-driving cars), or Belgium (drones), illustrate this class of coordination mechanisms and their scalability.

There is no need to put off implementation of the three sets of priorities for the field of AI and its governance introduced in the second part of this research. These sets of priorities recommend a model of governance that should neither hinder responsible technological innovation nor require over-frequent revision to achieve such progress. The model of smart coordination we recommend provides solutions that are (i) clear enough to impose society’s preferences on emerging innovation, (ii) flexible enough to accommodate the uncertainties of technological advance, and, at the same time, (iii) agile enough to capture an expanding understanding of AI with increasing regulatory granularity.