The use of Facial Analysis AI in job interviews

Artificial intelligence and facial analysis software are becoming commonplace in job interviews. The technology, developed by US company HireVue, analyses the language and tone of a candidate’s voice and records their facial expressions as they answer an identical set of questions on video.

There are certainly significant benefits to be had from this. HireVue says the technology speeds up the hiring process by 90%, because it can process information far faster than a human interviewer. But, according to The Conversation, there are important risks we should be wary of when outsourcing job interviews to AI.

The AI is built on algorithms that assess applicants against a database of about 25,000 pieces of facial and linguistic information, compiled from previous interviews with “successful hires” – candidates who have gone on to be good at the job. The 350 linguistic elements include criteria such as a candidate’s tone of voice, their use of passive or active words, sentence length and the speed at which they talk. The thousands of facial features analysed include brow furrowing, brow raising, how much the eyes widen or close, lip tightening, chin raising and smiling.
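HireVue does not publish how its model actually works, but the general idea – scoring a candidate’s extracted features against a profile built from previous “successful hires” – can be illustrated with a minimal sketch. Everything below (the feature names, the averaging, the similarity score) is a hypothetical illustration under that assumption, not HireVue’s actual method.

```python
import math

# Hypothetical feature vector extracted from one candidate's interview video.
# Feature names and values are illustrative only; a real system would use
# thousands of facial and linguistic measurements.
candidate = {
    "speech_rate_wpm": 145.0,      # words per minute
    "active_word_ratio": 0.62,     # share of active vs passive constructions
    "mean_sentence_length": 14.0,  # words per sentence
    "brow_raise_freq": 0.08,       # brow raises per second
    "smile_time_ratio": 0.21,      # fraction of time spent smiling
}

# Hypothetical profiles taken from interviews with previous "successful hires".
successful_hires = [
    {"speech_rate_wpm": 150.0, "active_word_ratio": 0.70,
     "mean_sentence_length": 12.0, "brow_raise_freq": 0.10,
     "smile_time_ratio": 0.25},
    {"speech_rate_wpm": 140.0, "active_word_ratio": 0.65,
     "mean_sentence_length": 15.0, "brow_raise_freq": 0.07,
     "smile_time_ratio": 0.30},
]

def average_profile(profiles):
    """Average each feature across the 'successful hire' examples."""
    keys = profiles[0].keys()
    return {k: sum(p[k] for p in profiles) / len(profiles) for k in keys}

def cosine_similarity(a, b):
    """Similarity between two feature dictionaries sharing the same keys."""
    keys = sorted(a)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = math.sqrt(sum(a[k] ** 2 for k in keys))
    norm_b = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b)

target = average_profile(successful_hires)
score = cosine_similarity(candidate, target)
print(f"Similarity to 'successful hire' profile: {score:.3f}")
```

A real system would normalise and weight features rather than compare them raw; the point of the sketch is simply that the candidate is scored by how closely they resemble the people who were hired before.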

The fundamental issue with this, as critics of AI often point out, is that the technology is not born into a perfect society. It is created within our existing society, which is marked by a whole range of biases, prejudices, inequalities and forms of discrimination. The data from which algorithms “learn” to judge candidates carries these existing beliefs.
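To see why this matters, consider a toy example with entirely made-up data: a naive “model” that simply learns historical hire rates per group from past decisions. Whatever bias sits in those past decisions is reproduced in its scores.

```python
from collections import defaultdict

# Made-up historical hiring records: (group, hired).
# The past decisions are skewed for reasons unrelated to ability.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# A naive "model" that learns the hire rate per group from the history.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

learned_score = {g: hired / total for g, (hired, total) in counts.items()}

# Two equally capable new candidates now receive different scores,
# purely because the training data encodes past bias.
print(learned_score)  # {'group_a': 0.75, 'group_b': 0.25}
```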

As UCLA professor Safiya Noble demonstrates in her book Algorithms of Oppression, a few simple Google searches show this happening. For example, when you search the term “professor style”, Google Images returns exclusively middle-aged white men. You get similar results for a “successful manager” search. By contrast, a search for “housekeeping” returns pictures of women.

This reflects how algorithms have “learnt” that professors and managers are mostly white men, while those who do housekeeping are women. And by delivering these results, algorithms necessarily contribute to the consolidation, perpetuation and potentially even amplification of existing beliefs and biases. For this very reason we should question the intelligence of AI: the solutions it provides are necessarily conservative, leaving little room for innovation and social progress.

Atomium-EISMD