The 1st AI4People Meeting will be held on 2 April at the European Parliament in Brussels.
The aim of the meeting is to present and discuss the Programme for 2019 with the Scientific Committee, the Partners (Business Partners and Civil Society Organizations) and MEPs, in order to collect input for the first part of the activities.
After An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, released in November 2018, the next step for the AI4People Scientific Committee will focus on the “Governance of AI”. The Recommendations of that document and its 20 Action Points (AP) are the starting point for this work. Most of the questions about how to assess, develop, incentivise and support a Good AI Society entail complex matters of governance.
The Governance of AI sets the level of abstraction needed to properly tackle four magnitudes of complexity: (i) regulation; (ii) know-how; (iii) possible solutions; and (iv) evaluation; that is, who, how, what, and why, respectively. More particularly, the intricacy concerns:
(i) The interaction between different regulatory systems. In addition to the rules of national and international lawmakers, consider the role that the forces of the market or of social norms (e.g. trust) play in this context. These regulatory systems can compete, and even clash, with each other. Several approaches capture part of this complex interaction, for example through models of risk regulation and of risk governance. Next year (2019), before Election Day in May, the EU Commission’s documents on a new legal framework for AI and its ethical principles should be available. In light of these documents, among others, the aim is to flesh out what kind of governance is emerging in the field of AI, and what kind we should opt for, before the new EU Parliament and Commission take office;
(ii) The interaction between different kinds of expertise. Matters of institutional design concern the ways in which economics, the law, technology and social norms interact. As shown by today’s debate on the weight that AI evidence should have in legal trials, the problems concern different heuristics and how we should govern them. As our previous recommendations on auditing mechanisms, or on an oversight agency, illustrate, the focus here is not on ‘who’ takes decisions (the previous magnitude of complexity), but rather on ‘how’. The sets of challenges that are unique to AI renew this kind of problem in terms of governance;
(iii) Solutions. The ethical framework provided in our previous document insisted on the context-dependent issues brought about by AI. It is clear, for example, that the complex set of provisions regulating the production and use of AI for autonomous vehicles scarcely overlaps with that for AI appliances in smart houses, in finance, etc. The same holds true in terms of governance. The latter may require the adoption of ‘central authorities’, such as a new EU oversight agency responsible for the protection of public welfare, but governance solutions will often depend on the specificity of the issues and AI domains we are dealing with. Here, matters revolve around ‘what’, rather than ‘who’ or ‘how’;
(iv) Evaluations. The final magnitude of complexity regards the setting of standards. This stance partially overlaps with our first point on how different regulatory systems interact, for we have legal standards, social standards, technical standards, etc. Attention is drawn here, however, to how such standards evolve and which new standards are required today. The question concerns ‘why’, rather than ‘who’, ‘how’ or ‘what’. The ethical framework for a good AI society has to be implemented through good AI governance, and standards will play a major role in this accomplishment.