Advances in Behavioral Economics

Paper Session

Saturday, Jan. 3, 2026 2:30 PM - 4:30 PM (EST)

Philadelphia Marriott Downtown, Room 308
Hosted By: Econometric Society
  • Chair: Hayong Yun, Michigan State University

Numbers Tell, Words Sell

Michael Thaler, University College London
Mattie Toma, University of Warwick
Victor Yaneng Wang, Massachusetts Institute of Technology

Abstract

When communicating numeric estimates, experts often choose between using numbers or natural language. We run two experiments to analyze whether experts strategically use language to persuade. In Study 1, senders in the general public communicate probabilities of abstract events to receivers; in Study 2, academic researchers communicate findings from research papers to policymakers. Incentives to persuade increase the likelihood of using language rather than numbers by 25–29 percentage points, and receivers are effectively persuaded. Experts slant language more than numbers, particularly when they prefer language. Our findings suggest that experts leverage the imprecision of language to excuse communicating slanted messages.

The Interaction of Memory Imperfections

Marcel Quint, LMU Munich

Abstract

While interaction effects play a prominent role in classical economics, little is known about how different behavioral biases interact. This paper sheds light on this question in the domain of memory. We relax the pre-existing dichotomy between the two main recall biases identified in the literature -- motivated and similarity-based recall -- and investigate whether they complement or substitute for each other. We propose a "System 1-System 2" model of recall that accommodates complementarity or substitutability depending on the relative importance of the two systems and the resulting cognitive effort devoted to memory. Our model predicts that the two recall biases become more complementary as unconscious System 1 gains importance over conscious System 2, and hence as exerted effort falls. Intuitively, agents are better able to exploit similarity to self-servingly bias their recall when they devote fewer cognitive resources to it. We confirm this and other predictions of our model in a lab experiment: the two recall biases are complements, and especially so at low effort levels. Furthermore, we find that the interaction of the two recall biases shapes subjects' belief formation and actions well beyond non-Bayesian updating.

Bayesian Adaptive Choice Experiments

Fernando Payro, Universitat Autònoma de Barcelona
Linh Thùy Tô, Boston University
Neil Thakral, Brown University

Abstract

We propose a dynamic choice experiment method, which we call the Bayesian Adaptive Choice Experiment (BACE), to elicit preferences efficiently. BACE generates an adaptive sequence of menus from which subjects make choices. Each menu is chosen optimally, according to the mutual information criterion, using the information contained in the subject's previous choices. We provide sufficient conditions under which BACE converges and show that its convergence rate significantly improves upon existing discrete choice methods with randomly generated menus; whenever preferences are deterministic, BACE achieves the highest possible rate of convergence. Beyond efficiency gains, BACE addresses a bias in estimating population-level average preference parameters that arises when data are pooled across individuals who differ in how inconsistent their choices are. Because BACE requires computing a Bayesian posterior as well as solving a non-trivial optimization problem, several computational challenges arise; we address them using Bayesian Monte Carlo techniques and provide a package for researchers. The separation between a front-end survey interface and a back-end computational server makes the BACE package portable across a wide range of research designs.
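The adaptive loop the abstract describes (update a posterior over preference parameters, then pick the menu that maximizes the mutual information between the next choice and those parameters) can be illustrated with a minimal sketch. The CRRA utility, logistic choice rule, discrete parameter grid, and function names below are assumptions for illustration only, not the paper's actual model or the BACE package's API:

```python
import numpy as np

def entropy(p):
    """Binary entropy in nats, clipped for numerical safety."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def choose_a_prob(theta, menu, noise=2.0):
    """P(choose lottery A over a safe amount) under CRRA utility x**theta
    and a logistic choice rule (illustrative model, not the paper's)."""
    (p_win, prize), safe = menu
    eu_a = p_win * prize ** theta
    eu_b = safe ** theta
    return 1.0 / (1.0 + np.exp(-noise * (eu_a - eu_b)))

def mutual_information(prior, thetas, menu):
    """I(choice; theta) = H(marginal choice) - E_prior[H(choice | theta)]."""
    probs = choose_a_prob(thetas, menu)
    marginal = np.dot(prior, probs)
    return entropy(marginal) - np.dot(prior, entropy(probs))

def next_menu(prior, thetas, menus):
    """Pick the candidate menu maximizing expected information gain."""
    gains = [mutual_information(prior, thetas, m) for m in menus]
    return menus[int(np.argmax(gains))]

def update(prior, thetas, menu, chose_a):
    """Bayesian posterior over theta after observing one choice."""
    like = choose_a_prob(thetas, menu)
    if not chose_a:
        like = 1.0 - like
    post = prior * like
    return post / post.sum()
```

Each round thus calls `next_menu`, records the subject's choice, and feeds it to `update`; by the concavity of entropy the information gain is never negative, so every optimally chosen menu weakly sharpens the posterior. The paper's contribution lies in the convergence guarantees and the scalable Monte Carlo computation of these quantities, neither of which this toy grid version attempts.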

AI as Decision-Maker: Risk Preferences of LLMs

Shumiao Ouyang, University of Oxford
Hayong Yun, Michigan State University
Xingjian Zheng, Shanghai Jiao Tong University

Abstract

Large Language Models (LLMs) exhibit surprisingly diverse risk preferences when acting as AI decision makers, a crucial characteristic whose origins remain poorly understood despite their expanding economic roles. We analyze 50 LLMs using behavioral tasks and find stable but diverse risk profiles. Alignment tuning for harmlessness, helpfulness, and honesty causally increases risk aversion, as confirmed via comparative difference analysis: a ten percent increase in ethics emphasis cuts risk appetite by two to eight percent. This induced caution persists across prompts and affects economic forecasts. Alignment enhances safety but may also suppress valuable risk taking, revealing a tradeoff that risks suboptimal economic outcomes.
JEL Classifications
  • D9 - Micro-Based Behavioral Economics