Using Large Language Models to Predict Human Choice from Textual Description

Wednesday, 10.09, 13:00 - 13:30

Abstract: Predicting human decision-making under risk and uncertainty is a long-standing challenge in cognitive science, economics, and AI. While prior research has focused on numerically described lotteries, real-world decisions often rely on textual descriptions. This study conducts the first large-scale exploration of human decision-making in such tasks, using a large dataset of one-shot binary choices between textually described lotteries. We analyze the behavioral patterns that emerge when choosing between textually described lotteries and compare them to those observed under numeric descriptions. We also evaluate multiple computational approaches, including fine-tuning Large Language Models (LLMs), leveraging embeddings, and integrating behavioral theories of choice under risk. Our results show that fine-tuned LLMs, specifically GPT-4o, outperform hybrid models that incorporate behavioral theory, challenging approaches established in numerical settings. These findings highlight fundamental differences in how textual and numerical information influence decision-making and underscore the need for new modeling strategies to bridge this gap.

Speaker

Eyal Marantz

Technion