BYLINE: JT Godfrey

News — From detecting cancer in medical imaging to recommending additional lesson plans for K-12 students, algorithm technologies have become crucial across many industries. Whether algorithms actually improve efficiency is another question.

Despite the widespread adoption of algorithm technologies, research on “algorithm aversion” shows that workers are hesitant to use them in their everyday work in many contexts. In their new paper, Snyder, Keppler, and Leider explored how algorithm reliance and avoidance play out in customer-facing services.

Design vs. reality

One inspiration for the research was Snyder’s pre-PhD experience designing algorithm tools. She noted that the design of such tools is often driven by optimism about efficiency and workplace adoption, but the design process rarely accounts for the human elements of implementation.

We often hear people talk about ideal visions for algorithms. One vision is that these tools will help people make really good decisions, quickly. But often this vision is based on the algorithm’s performance in a vacuum; it’s abstracted from the context where people are making those decisions.

Clare Snyder, PhD ’25

Snyder, Keppler, and Leider examined human-algorithm interaction in a study of over 400 participants and found that algorithm aversion can affect decisions in ways not previously explored. In particular, they found that workers were averse to using algorithm-generated recommendations to make fast decisions.

Contrary to conventional expectations, workers are not always faster when an algorithm is introduced. The efficiency gains of adding algorithms to worker-customer interactions depend on how quickly workers adopt algorithm-generated suggestions, which in turn depends on how much they trust the algorithm’s accuracy and on the workload pressure they face.

A lesson for decision-makers

The study provides industry leaders and tech developers with essential considerations for designing and implementing algorithms into their everyday operations.

“You may spend a lot of time developing an algorithm for a lofty goal. However, workers have to trust an algorithm to take its advice quickly. You can't expect efficiency until they’ve had time to get the information about the algorithm's good performance. Even then, the other conditions, such as workload and time pressure, have to be right,” shared Snyder.

The team’s key findings show that companies need to set clear goals before implementing algorithm technology.

“Our study shows that it is important to remember that workers can use algorithms quickly—or slowly: they can default to an algorithm's recommendation right away or spend a long time considering it,” shared Keppler. “Organizational leaders need to think about which of those actions is preferable. Is the algorithm primarily there to help workers speed up or to improve worker accuracy?”

Implementing findings in a ChatGPT world

The team shared that this research is just the beginning of exploring the interaction between workers and machine learning algorithms. Their hope is that more field and observational research into how workers use algorithms will ultimately improve how we interact with algorithms in our everyday lives.

With the proliferation of popular tools such as ChatGPT in particular, human reliance on algorithms will only grow more complicated.

Generative AI is interesting because a lot of people are exploring and learning about the technology through experimentation. We can build trust with generative AI, or not, kind of like we build trust with other people: slowly, over time, by learning what it can and cannot do well. While this, in theory, might be a good thing for reducing aversion, generative AI technologies are constantly changing and often inconsistent, making that trust hard to build—at least right now.

Samantha Keppler, assistant professor of technology and operations

Their paper is forthcoming in Management Science.