AI4People began as a research and policy project following a pioneering initiative launched in 2018 by Professor Luciano Floridi, Michelangelo Baracchi Bonvicini, Tony Blair, Atomium-EISMD and the founding members (Audi, Elsevier, Facebook, Fujitsu, Google, Intesa SanPaolo, Johnson & Johnson and Microsoft) to shape the debate on AI ethics in the European Union and to urge European institutions to act quickly to stem future AI risks.
Today the AI4People Institute (a non-profit organization registered in Belgium) brings together leading AI companies, academics, civil society groups and governments to consider the risks of AI and to advocate for responsible and beneficial AI development and deployment. Close contact with leading policymakers has been a key factor in the success of its work, which lies at the origin of the European regulatory process that produced the AI Act.
AI4People focuses on various aspects related to AI, including ethics, policy, governance, privacy, inclusivity, and the responsible use of AI across different domains.
Key objectives of the AI4People Institute include:
1. Ethical Frameworks: Developing ethical guidelines and frameworks to guide the development and application of AI, emphasizing fairness, transparency, accountability, and inclusivity.
2. Public Awareness and Education: Raising awareness among the public regarding AI’s potential and challenges, fostering understanding, and promoting informed discussions on its societal impact.
3. Policy Recommendations: Providing recommendations to policymakers and regulatory bodies to develop appropriate laws and regulations that govern AI usage, ensuring it aligns with societal values and objectives.
4. Inclusivity and Diversity: Encouraging diverse perspectives and inclusivity in AI development to minimize bias and enhance AI systems’ understanding and representation of all individuals.
5. Privacy and Data Protection: Advocating for robust data privacy measures and emphasizing the importance of securing and protecting individuals’ data in AI applications.
6. Collaboration and Knowledge Sharing: Facilitating collaboration among stakeholders, sharing best practices, research findings, and insights to foster a collective effort towards responsible AI.
For more information please contact: firstname.lastname@example.org
AI4People was established in February 2018 following a meeting in 2017, during Atomium's Next Generation Internet Summit, between Prof. Luciano Floridi, Founding Director of the Digital Ethics Center at Yale University, and Michelangelo Baracchi Bonvicini, President of Atomium-EISMD. Six months later the European Commission established the High-Level Expert Group on Artificial Intelligence, in which many scientific members of AI4People were engaged.
AI4People’s first year of activity led to the “AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, presented to the European Parliament in December 2018 by Prof. Luciano Floridi, Michelangelo Baracchi Bonvicini and Tony Blair. This important achievement was the starting point for the High-Level Expert Group on Artificial Intelligence created by the European Commission, and it led to the Ethics Guidelines for Trustworthy Artificial Intelligence, presented in April 2019, which served as a basis for the Artificial Intelligence Act (AI Act).
Subsequently, AI4People’s scientific activities led to the “Report on Good AI Governance: 14 Priority Actions, a S.M.A.R.T. Model of Governance, and a Regulatory Toolbox“, presented to the European Parliament on November 6th.
In 2020, AI4People identified seven strategic sectors (Automotive, Banking & Finance, Energy, Healthcare, Insurance, Legal Service Industry, Media & Technology) for the deployment of ethical AI, appointing seven committees to analyse how trustworthy AI can be implemented in these sectors. The resulting “AI4People’s 7 AI Global Frameworks“ were presented to the European Parliament on December 1, 2020.
This work contributed to the drafting of the AI Act and supported the policy makers involved in the process.
In 2021 and 2022 AI4People organised a series of open-door conversations on certifications, assurance and sandboxes:
- First AI4People’s Conversation (December 1, 2021): for the results, please see here.
- Second AI4People’s Conversation (March 23, 2022): for the results, please see here.
- Third AI4People’s Conversation (December 6, 2022): for more information, please see here.
Current Activities: Towards an Ethical Impact Assessment for AI
The EU AI Act represents a crucial milestone in regulating the development, deployment, and use of artificial intelligence (AI). In parallel, discussions on AI ethics have delved deeper into the specific ethical challenges posed by AI systems. These developments underscore the need for impact assessments that scrutinize the ethics of existing and upcoming AI solutions. The whitepaper aims to support organizations with exposure to the European Union in their efforts to evaluate the impact of high-risk AI solutions through an ethical lens. Its aspiration is to establish the state of the art for ethical impact assessments of AI systems.
To this end, the whitepaper “Towards an Ethical Impact Assessment for AI“ will:
a) elaborate on the implications of existing ethical and legal frameworks for the conduct of impact assessments for AI,
b) develop a scheme and methodology for determining the ethical impact of AI systems, and
c) describe the process and documentation required when performing an ethical impact assessment for AI solutions.
Why AI4People? AI4People stands for a European approach to the governance and ethics of AI. It is therefore composed of European experts in the field of AI ethics as well as representatives of international companies involved in the development, deployment and use of AI. A key mission of AI4People is to create an ethical impact assessment driven by the values of the European Union.
For more information or collaboration proposals please contact: email@example.com
The entities involved in creating the whitepaper are the working group, organized into different work streams, and the scientific committee, which challenges and reviews the work conducted by the working group. The working group drives the conceptualization and realization of the whitepaper, delivering its final proposal to the scientific committee; the scientific committee guarantees the validity of the final whitepaper.
The working group consists of experts in the field of AI ethics from different perspectives (academia, auditing and industry). The members are Raja Chatila (Sorbonne Université), Hiroya Inakoshi (Fujitsu Limited), Virginia Ghiara (Fujitsu Limited), Katie Evans (ex IEEE and UNESCO), Bianca de Teffé (Deloitte), Sergei Bobrovskyi (Airbus) and Alexander Kriebitz (Technical University of Munich). The members of the working group are distributed across the following work streams:
- Work stream 1 [WS 1]: The main task of WS 1 is to identify the ethical and legal requirements for ethical impact assessments of AI based on the state of the art in European AI regulation and the European AI ethics discourse.
- Work stream 2 [WS 2]: The main task of this working stream is to design an overall scheme of a tentative AI ethics impact assessment, as well as to develop the criteria and structure of the assessment.
- Work stream 3 [WS 3]: This work stream focuses on the process of the ethical impact assessment based on the existing state of the art in AI auditing. It clarifies what kind of data input is needed to inform an ethical impact assessment, as well as the level of scrutiny required when reviewing an AI system.
The publication of the whitepaper is scheduled for March 2024, coinciding with major legal developments in the European Union, and it will be presented to the European Parliament and the European Commission during the AI4People Summit “Towards a Good AI Society”.
AI4People is registered in the EU Transparency Register under the number 343652748893-89