Whether judged by the yardstick of media coverage or academic funding, industrial investment or political interest, the impact that AI is having on society is no longer a niche concern. Each day brings both exciting new breakthroughs and disturbing new warnings about the threats that AI poses to human life. But between over-optimistic prophecies on one side and the divination of doomsayers on the other, there is space for clear-eyed, evidence-based analysis of the actual opportunities and threats to society presented by artificial intelligence.
It is within this middle ground that AI4People will operate. The goal is to create a shared public space for laying out the founding principles on which to build a “good AI society”.
For this to succeed, we need to agree on how best to nurture human dignity, foster human flourishing, and care for the wider world. This is not just a matter of legal acceptability and compliance with what may or may not be done; it is a matter of ethical preferability and commitment to what should or should not be done.
It is easy to support the principle of developing AI that is ethical; it is rather harder to make it so.
Achieving this ambitious objective will require, first, a systematic ethics of AI. Only with the foundations of such an ethical system in place will we be able to assess and redress real-world applications of AI. If we get these foundations wrong, any resulting assessment will be flawed, misleading, and dangerously distracting.