Mapping the ethics of AI


By mapping the broad ethical dangers of AI, we can more readily confront specific issues that arise from using AI. Of course, many ethical issues will have consequences falling into more than one category. So the above figure should be seen as a magnifying glass through which AI's ethical challenges may best be interpreted. Sometimes, the glass will reveal important details under each "colour".

Take, for example, security screening for air travel, which poses at least five distinct ethical challenges, as recently outlined:

1. This is a sensitive task requiring human oversight; we must therefore guard against reducing or eliminating the ability of humans to supervise and intervene where necessary to ensure the system operates safely.

2. If something does go wrong, we risk being unable to hold anyone responsible if it is unclear which tasks were delegated and by whom.

3. If we delegate more and more of this task to AI, and it seems to work perfectly, preventing every potential attack, there is a risk that over time we will devalue human skills to the point that, when something does eventually go wrong, we will lack the resources required to intervene effectively.

4. If we blindly trust the effectiveness of artificial screening without knowing how it works, we may lose the ability to determine policies that maximise overall wellbeing. If the threat of attacks becomes far lower over time, people may feel that the indignity and inconvenience of the process are no longer justified by the risk. But an AI system whose only goal is to maintain security may nudge us in a different direction, without allowing policymakers to assess critically what makes it so successful. Over time, these debates could become one-sided, with a goal like security glorified beyond reproach at the expense of other principles like dignity.

5. There is always a risk of the screening system being hacked or misused in a way that leaves it vulnerable to systematic manipulation at scale.

This example is just one of the many thorny ethical issues that particular applications of AI raise. But it highlights the value of approaching the ethics of AI systematically. Outlining the distinct challenges of AI at the outset will enable the AI4People project to grapple with specific cases as they arise.

In the following pages, we describe in more detail the proposed roadmap for AI4People, including the particular milestones which will guide the project towards the goal of developing a Good AI Society.