The AI4People’s Ethical Framework
for a Good AI Society
In November 2018, on behalf of AI4People and its partners, Prof. Luciano Floridi presented at the European Parliament the “AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, the result of AI4People’s first year of activity. This work served as inspiration to the European Commission and guided the identification of the seven Key Requirements for Trustworthy AI presented by the Commission in April 2019.
AI is not another utility that needs to be regulated once it is mature. It is a powerful force, a new form of smart agency, which is already reshaping our lives, our interactions, and our environments. AI4People was set up to help steer this powerful force towards the good of society, everyone in it, and the environments we share.
The AI4People’s Ethical Framework for a Good AI Society is the outcome of a collaborative effort by AI4People to propose a series of recommendations for the development of a Good AI Society. It synthesises three things: the opportunities and associated risks that AI technologies offer for fostering human dignity and promoting human flourishing; the principles that should undergird the adoption of AI; and twenty specific recommendations that, if adopted, will enable all stakeholders to seize the opportunities, to avoid or at least minimise and counterbalance the risks, to respect the principles, and hence to develop a Good AI Society.
The opportunities and risks of AI for Society
Establishing an ethical framework for AI in society requires an explanation of the opportunities and risks that the design and use of the technology presents. We identify four ways in which, at a high level, AI technology may have a positive impact on society, if it is designed and used appropriately. Each of these four opportunities has a corresponding risk, which may result from its overuse or misuse. There is also an overarching risk that AI might be underused, relative to its potential positive impact, creating an opportunity cost. An ethical framework for AI must be designed to maximise these opportunities and minimise the related risks.
A unified framework of principles for AI
Several multistakeholder groups have created statements of ethical principles that should guide the development and adoption of AI. Rather than repeat the same process here, we instead present a comparative analysis of several of these sets of principles. Each principle expressed in each of the documents we analyse is encapsulated by one of five overarching principles. Four of these – beneficence, non-maleficence, autonomy, and justice – are established principles of medical ethics; a fifth – explicability – is also required, to capture the novel ethical challenges posed by AI.
Twenty recommendations for a Good AI Society
AI4People offers twenty concrete recommendations tailored to the European context which, if adopted, would facilitate the development and adoption of AI that maximises its opportunities, minimises its risks, and respects the core ethical principles identified. Each recommendation takes one of four forms: to assess, to develop, to incentivise, or to support good AI. Some of these recommendations may be undertaken directly by national or supranational policy makers, while others may be led by other stakeholders. Taken together with the opportunities, risks, and ethical principles we identify, the recommendations constitute the final element of an ethical framework for a Good AI Society.
Download the full report