The five challenges

Reducing human control

There is a risk of delegating important tasks to autonomous systems that should remain at least partly under human supervision, whether 'in the loop', for monitoring purposes, or 'post loop', for redressing any errors or harms that arise.

Removing human responsibility

AI may deresponsibilise people whenever an AI system can be blamed for a failure in their place. This may make it harder to hold people accountable, and legally liable, for particular failures of AI systems.

Devaluing human skills

AI may deskill people, with potentially dangerous effects in sensitive, skill-intensive domains such as medical diagnosis and aviation. If, for example, a couple of decades hence too few human experts remained able to diagnose cancer unaided, society would be ill-equipped to cope with AI malfunction or malevolent attack.

Eroding human self-determination

AI may erode human self-determination, as it may lead to unplanned and unwelcome changes in human behaviour: people adapt to the routines that make automation work and their lives easier. AI's predictive power and relentless nudging, even when unintentional, should foster human dignity and self-determination, not undermine them. Yet such nudging has already been commercialised, as in online retail recommendations, and allegedly weaponised, with public opinion falling prey to subtle but powerful propagandist bots that can potentially affect the outcomes of national elections.

Enabling human wrongdoing

We must consider the potential for malevolent uses of AI: in the wrong hands, this powerful technology would pose grave threats to the security and prosperity of us all. One safeguard against such misuse is to adopt the Kantian formulation that we should treat people always as ends in themselves, and never merely as means.