When Less Is More: Overcoming Algorithm Aversion with Biased Advice
Wed 09.07 11:00 - 11:30
- Behavioral and Management Sciences Seminar
- Bloomfield 527
ABSTRACT
In an era of rapid technological advancement, decision-making processes across various domains increasingly integrate algorithmic systems. Although algorithms often outperform human experts, individuals frequently exhibit algorithm aversion: a tendency to prefer human judgment even when algorithmic recommendations are more accurate. Previous research finds that people who develop experience-based task expertise often provide biased advice, reflecting biases in decisions from experience. Building on these findings, the present study investigates people's preferences between human and algorithmic advice under different conditions. Specifically, we manipulate both the type of algorithmic recommendation (optimizing for average outcomes vs. performing better "most of the time") and the feedback type (complete information about chosen and unchosen options vs. information only about chosen options). Participants complete sixty binary choice trials, receiving recommendations from both a human expert and an algorithm before each decision. We first find that when the human advisor provides partially biased advice, participants prefer the human over an optimal algorithm. However, when the algorithm provides even more biased advice favoring the option that performs better most of the time, participants prefer this biased algorithm over the human advisor. Notably, this effect emerges only under full feedback conditions. Under partial feedback, where learning from experience is more difficult, participants continue to prefer biased human experts, regardless of the algorithm's type. We thus identify both the conditions under which people learn to overcome algorithm aversion and the conditions under which initial algorithm aversion persists over time.

