Large Language Models in Experimental Economics
Paper Session
Monday, Jan. 5, 2026 10:15 AM - 12:15 PM (EST)
- Chair: Colin Camerer, California Institute of Technology
AI Agents Can Enable Superior Market Designs
Abstract
Many theoretically appealing market designs are under-utilized because they demand preference data that humans find costly to provide. This paper demonstrates how large language models (LLMs) can effectively elicit such data from natural-language descriptions. In our experiment, human subjects provide free-text descriptions of their tastes over potential roles they could be assigned. An LLM converts these descriptions into cardinal utilities that capture participants’ preferences. We use these utilities and participants’ stated preferences to run three allocation mechanisms: random serial dictatorship, Hylland-Zeckhauser, and a conventional job-application game. A follow-up experiment confirms that participants themselves prefer LLM-generated matches over simpler alternatives under high congestion. These findings suggest that LLM-proxied preference elicitation can enable superior market designs where they would otherwise be impractical to implement.
LLMs Can Model Non-WEIRD Populations: Experiments with Synthetic Cultural Agents
Abstract
Despite its importance, studying economic behavior across diverse, non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations presents significant challenges. We address this issue by introducing a novel methodology that uses Large Language Models (LLMs) to create synthetic cultural agents (SCAs) representing these populations. We subject these SCAs to classic behavioral experiments, including the dictator and ultimatum games. Our results demonstrate substantial cross-cultural variability in experimental behavior. Notably, for populations with available data, SCAs’ behaviors qualitatively resemble those of real human subjects. For unstudied populations, our method can generate novel, testable hypotheses about economic behavior. By integrating AI into experimental economics, this approach offers a proof-of-concept for an effective and ethical method to conduct exploratory analysis, pilot experiments, and refine protocols for hard-to-reach populations. Our study provides a new tool for cross-cultural economic studies and highlights the potential of LLMs to advance experimental and behavioral research.
An Application of Automatic Prompt Optimization: Experimental Tests of Framing Effects
Abstract
We use LLMs as an “instruction searching” tool. In this paper, we apply automatic prompt optimization methods to revise the experimental instructions in stag hunt games in a way that systematically induces more coordination on payoff-dominant equilibrium play in LLM simulations. We then test whether the framing effects predicted by the LLMs carry over to human subjects in behavioral experiments.
Discussant(s)
Colin Camerer, California Institute of Technology
JEL Classifications
- D9 - Micro-Based Behavioral Economics
- C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling