Since AI may be understood in many different ways, we rely on a long-held and widely understood definition: AI is whatever technology we develop and use to deal with tasks that “would be called intelligent if a human were so behaving”. This definition dates back to a proposal written in 1955 for a summer research project on artificial intelligence. The research project, which took place at Dartmouth College in 1956, is often taken to be the founding moment of artificial intelligence as a discipline. Notably, this definition is counterfactual: AI’s capacity is defined not as intelligent in itself, but in the sense that, if a human were to achieve the same result, that human would have to be intelligent. AI may now be more efficient or successful at a given task – such as winning a game of Go – than any human alive today. This means only that, if human players were that good, they would have to be exceptionally intelligent. AI is not and, crucially, does not have to be.
We have so far only defined what AI does; more important still is what AI is: a growing resource of interactive, autonomous, and self-learning agency. AI does not need to be considered “intelligent”, or “conscious”, or “lifelike”, in order to pose serious risks to society as we know it.
As a smart form of agency, AI has great potential to reshape society fundamentally. With AI technology, we are no longer at the centre of the space of information (or “infosphere”); instead, we share it with the digital technologies that surround us. They can collect, store, and process data as we do and, increasingly often, much better than we can. This has major implications for our relationships both with each other and with our technology.
So even though smart technologies can outperform us at a growing number of tasks, this should not be confused with being better at thinking, in any conscious sense. Digital technologies do not think, let alone think better than us. They can, however, do more and more things better than us: through machine learning techniques, they process ever-larger amounts of data and improve their performance by analysing their own output as input for future operations. The most serious risk, therefore, is not some sci-fi emergence of malevolent ultraintelligence, but that we may misuse or underuse our digital technologies, to the detriment of a large percentage of humanity and the planet as a whole.
In addition, defining AI as smart agency means that some of the political and social challenges traditionally associated with digital technology are not our focus here. For example, because AI is fuelled by data, some of the challenges it poses are rooted in data governance, especially consent, ownership, privacy, surveillance, and trust. Each of these issues is immensely important, but in these cases AI cannot easily be decoupled from broader questions of data management and use. AI4People, in contrast, will focus squarely on ethical challenges that are specific to AI as we define it here.