After the AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, the activities in 2019 will focus on the “Governance of AI”. The Recommendations of the aforementioned document and its 20 Action Points (AP) will be the starting point upon which we will build in 2019. Most of the questions on how to assess, develop and support a Good AI Society entail complex matters of governance.
1. AI4People Forum. At the heart of the second year will be the activity of the AI4People Forum, open to representatives of governments around the world, European institutions, civil society organisations, relevant media and leading businesses. Members will be invited to attend three meetings in Brussels and to participate in the various phases of drafting the Good AI Governance Report (for more information, please request the Action Program from the AI4People’s Secretariat):
►April 2nd 2019: Presentation of the Scientific Activities for 2019.
►July 10th 2019: Presentation of the initial draft.
►November 2019: AI4People Final Event: Presentation of the Good AI Governance Report at the European Parliament.
2. Scientific Activity on: Governance of AI. The Governance of AI sets the level of abstraction needed to properly tackle four magnitudes of complexity, concerning (i) regulation; (ii) know-how; (iii) possible solutions; and (iv) evaluation — that is, who, how, what, and why. More particularly, the complexity regards:
(i) The interaction between different regulatory systems. In addition to the rules of national and international lawmakers, consider the role that the forces of the market, or of social norms, play in this context, e.g. trust. These regulatory systems can compete, and even clash, with each other. Several approaches capture part of this complex interaction, for example, through models of risk regulation and of risk governance. In 2019, before Election Day in May, the EU Commission’s documents on a new legal framework for AI and its ethical principles should be available. Also, but not only, in light of these documents, the aim is to flesh out what kind of governance is emerging in the field of AI, and which kind we should opt for, before the new EU Parliament and Commission take office;
(ii) The interaction between different kinds of expertise. Matters of institutional design regard the ways in which economics and the law, technology and social norms interact. As shown by today’s debate on the weight that AI evidence should have in legal trials, problems concern different heuristics and how we should govern them. As our previous recommendations on auditing mechanisms, or an oversight agency, illustrate, the focus here is not on ‘who’ takes decisions (the previous magnitude of complexity), but rather on ‘how.’ The sets of challenges that are unique to AI renew this kind of problem in terms of governance;
(iii) Solutions. The ethical framework provided in our previous document insisted on the context-dependent issues brought about by AI. It is clear, for example, that the complex set of provisions regulating the production and use of AI for autonomous vehicles scarcely overlaps with that of AI appliances for smart houses, for finance, etc. The same holds true in terms of governance. The latter may require the adoption of ‘central authorities,’ such as a new EU oversight agency responsible for the protection of public welfare, but governance solutions will often depend on the specificity of the issues and AI domains we are dealing with. Here, matters revolve around ‘what,’ rather than ‘who’ or ‘how’;
(iv) Evaluations. The final magnitude of complexity regards the setting of standards. This stance partially overlaps with our first stance on how different regulatory systems interact, for we have legal standards, social standards, technical standards, etc. Attention is drawn here, however, to how such standards evolve and why new standards are required today. The question concerns ‘why,’ rather than ‘who,’ ‘how,’ or ‘what.’ The ethical framework for a good AI society has to be implemented through good AI governance, and standards will play a major role in this accomplishment.
Accordingly, we propose three lines of enquiry that regard:
(i) Forms of Engagement, e.g. participatory mechanisms to ensure alignment with societal values and understanding of public opinion through an ongoing dialogue between all stakeholders;
(ii) No-regrets Actions (e.g. education and a sustained, increased and coherent European research effort);
(iii) Coordination Mechanisms that may function as the middle-out interface of our priorities. Such mechanisms should help tackle the current limits on any clear understanding of what is at stake with AI; consolidate new fora for collective consultation and discussion; and develop new standards and mechanisms for (enforced) self-regulation. See the figure below:
The next step of the research is to specify the content of these priorities and to determine whether a further level of intervention should be taken into account.