The behavioural revolution in economics was triggered by a simple, haunting question: what if people don’t act rationally? This same question now vexes the technology field. In the online world, once expected to be a place of ready information and easy collaboration, lies and hate can spread faster than truth and kindness. Corporate systems, too, elicit irrational behaviour. AI stands at the crossroads of the behavioural question, with the potential to make matters worse or to elicit better outcomes from us. Bob Suh tells Harvard Business Review how we can boost AI’s emotional quotient (EQ) by training algorithms to mimic the way people behave in constructive relationships:
Whether or not we care to admit it, we build relationships with apps. And apps, like people, can elicit both positive and negative behaviours from us. When people with high EQ interact with us, they learn our patterns, empathize with our motivations, and carefully weigh their responses. They decide to ignore, challenge, or encourage us depending on how they anticipate we will react.
AI can be trained to do the same thing. Why? Because behaviours are more predictable than we like to think. The $70 billion weight-loss industry thrives because diet companies know that most people regain lost weight. The $40 billion casino industry profits from gamblers’ illogical hope of a comeback. Credit card companies know it is hard for people to break their spending habits.
While it’s still quite early, the fields of behavioural science and machine learning already provide some promising techniques for creating higher-EQ AI that organizations are putting to work to produce better outcomes. Those techniques include:
Noting pattern breaks and nudging. People who know you can easily tell when you break a pattern and react accordingly. For example, a friend may notice that you have suddenly changed your routine and ask why. Bank of America's online bill-paying system similarly notes pattern breaks to prevent keying errors: it remembers the payments you've made to each vendor in the past and posts an alert if you substantially increase a payment.
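The article doesn't describe the bank's actual detection logic, but a minimal sketch of pattern-break detection is a z-score rule over a user's payment history: flag any new payment that sits well outside the historical spread. The function name and threshold here are illustrative assumptions, not Bank of America's implementation.

```python
from statistics import mean, stdev

def pattern_break_alert(history, new_amount, z_threshold=3.0):
    """Flag a payment that breaks the user's historical pattern.

    `history` is a list of past payment amounts to one vendor. A new
    payment is flagged when it sits more than `z_threshold` standard
    deviations above the historical mean (a simple, illustrative rule;
    not the bank's real algorithm).
    """
    if len(history) < 2:
        return False  # too little data to establish a pattern
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu  # any deviation from a constant pattern
    return (new_amount - mu) / sigma > z_threshold

# A utility bill near the usual amount fits the pattern; a 10x payment breaks it.
history = [110, 115, 118, 112, 120, 116]
print(pattern_break_alert(history, 121))   # → False
print(pattern_break_alert(history, 1200))  # → True
```

A real system would add seasonality and vendor-specific context, but the core idea, learn the pattern and nudge only on a break, is the same.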
Encouraging self-awareness with benchmarks. Bluntly telling individuals they are performing poorly often backfires, provoking defensiveness rather than greater effort. A more diplomatic approach simply lets people see how they compare with others. For instance, a major technology firm used AI to generate more accurate sales forecasts than its sales team did. To nudge the team to course-correct, the system provides each member with a personalized visualization showing how their forecast differs from the AI's, along with a simple prompt asking why that might be. The team member can offer a rational explanation, decline to respond, or claim the AI is wrong. The AI learns from the substance and timing of each reaction, weighs it against the gap between the two forecasts, and chooses an appropriate second-order nudge.
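The benchmark-and-nudge loop above can be sketched in a few lines. The thresholds, function names, and response categories below are assumptions for illustration; the article does not disclose how the firm's system actually weighs reactions.

```python
def forecast_nudge(member_forecast, ai_forecast, gap_threshold=0.15):
    """First-order nudge: surface the gap between a team member's
    forecast and the AI benchmark, without bluntly saying 'you are wrong'.
    The 15% threshold is an illustrative assumption.
    """
    gap = (member_forecast - ai_forecast) / ai_forecast
    if abs(gap) <= gap_threshold:
        return None  # forecasts roughly agree; no nudge needed
    direction = "above" if gap > 0 else "below"
    return (f"Your forecast is {abs(gap):.0%} {direction} the AI "
            f"benchmark. What might explain the difference?")

def second_order_nudge(reaction, gap):
    """Choose a follow-up based on the member's reaction, weighed
    against the size of the gap. Reaction labels are hypothetical:
    'explained', 'ignored', or 'disputed'.
    """
    if reaction == "explained":
        return None  # a rational explanation closes the loop
    if reaction == "disputed" or abs(gap) > 0.3:
        return "schedule_review"  # big gap or pushback: escalate gently
    return "gentle_reminder"

print(forecast_nudge(98_000, 100_000))   # → None (2% gap, within threshold)
print(forecast_nudge(150_000, 100_000))  # → asks about the 50% gap
print(second_order_nudge("ignored", 0.2))  # → gentle_reminder
```

The design point is that the system never issues a verdict; it shows a comparison, asks a question, and escalates only when the reaction and the gap together warrant it.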
The full article is available from Harvard Business Review.