
Forming Mental Models

Paper Session

Sunday, Jan. 5, 2025 1:00 PM - 3:00 PM (PST)

Hilton San Francisco Union Square, Union Square 15 and 16
Hosted By: American Economic Association
  • Chair: Aislinn Bohren, University of Pennsylvania

Over- and Underreaction to Information

Cuimin Ba, University of Pittsburgh

Abstract

This paper explores how cognitive constraints—namely, attention and processing capacity—interact with properties of the learning environment to determine how people react to information. In our model, people form a simplified mental representation of the environment via salience-channeled attention, then process information with cognitive imprecision. The model predicts overreaction to information when environments are complex, signals are noisy, information is surprising, or priors are concentrated on less salient states; it predicts underreaction when environments are simple, signals are precise, information is expected, or priors are concentrated on salient states. Results from a series of pre-registered experiments provide support for these predictions and direct evidence for the proposed cognitive mechanisms. We show that the two psychological mechanisms act as cognitive complements: their interaction is critical for explaining belief data, and together they yield a highly complete model in terms of capturing explainable variation in belief updating. Our theoretical and empirical results connect disparate findings in prior work: underreaction is typically found in laboratory studies, which feature simple learning settings, while overreaction is more prevalent in financial markets, which feature greater complexity.
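
The abstract does not spell out the model's functional form, but over- and underreaction are commonly benchmarked against Bayes' rule by how strongly posterior beliefs respond to a signal's log-likelihood ratio. The Python sketch below is a hypothetical illustration of that benchmark only, not the authors' model; the function posterior_log_odds and the reaction coefficient gamma are illustrative assumptions.

    # Hypothetical sketch (not the authors' model): over- vs. underreaction is often
    # measured against the Bayesian benchmark by scaling the log-likelihood ratio.
    # gamma = 1 reproduces Bayes' rule; gamma > 1 is overreaction, gamma < 1 underreaction.
    import numpy as np

    def posterior_log_odds(prior_log_odds, log_likelihood_ratio, gamma=1.0):
        """Posterior log-odds when the signal is weighted by a reaction coefficient gamma."""
        return prior_log_odds + gamma * log_likelihood_ratio

    prior = np.log(0.5 / 0.5)   # flat prior over two states
    llr = np.log(0.7 / 0.3)     # a moderately informative signal

    for gamma, label in [(1.0, "Bayesian"), (1.5, "overreaction"), (0.5, "underreaction")]:
        post = posterior_log_odds(prior, llr, gamma)
        prob = 1 / (1 + np.exp(-post))
        print(f"{label:>13}: P(state A | signal) = {prob:.3f}")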

Extracting Models from Data Sets: An Experiment

Emmanuel Vespa, University of California-San Diego
Guillaume Fréchette, New York University
Sevgi Yuksel, New York University

Abstract

We experimentally study the types of mental models people form by learning from a set of observations. Specifically, we study the kinds of inferences individuals make regarding the statistical relationships among variables from examining data. While in the majority of cases participants do this in a near-optimal way, in 39% of cases they do not. They exhibit two recurring errors: The first error, observed 13% of the time, involves misidentifying the underlying correlation structure and occurs most often in the presence of confounding variables, rising to 28% in such cases. The second error, the most common overall (observed 26% of the time), involves a failure to identify any correlations. On the one hand, the first error results in small losses in the most common occurrences; on the other hand, the second error is associated with randomizing behavior, creates substantial losses, and indicates significant difficulties in learning from the data. Importantly, participants display a high degree of consistency in the types of mistakes they make. Hence, different subjects draw opposing conclusions on what constitutes optimal behavior when presented with identical information, and those differences are predictable.
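
As a point of reference for the first error, the toy simulation below (a hypothetical illustration in Python, not the experimental design) shows how a confounder can induce a raw correlation between two variables that have no direct relationship; reading that raw correlation as a direct link is a misidentification of the correlation structure. The variable names and parameters are assumptions for illustration.

    # A confounder Z drives both X and Y, so X and Y are correlated in the raw data
    # even though neither causes the other.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.normal(size=n)                 # confounder
    x = z + rng.normal(scale=0.5, size=n)  # X depends only on Z
    y = z + rng.normal(scale=0.5, size=n)  # Y depends only on Z

    print("corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 3))  # strongly positive

    # Residualizing on Z removes the association, revealing no direct X-Y relationship.
    x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
    y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
    print("corr(X, Y | Z):", round(np.corrcoef(x_resid, y_resid)[0, 1], 3))  # near zero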

Model Uncertainty, Disagreement, and Over-Precision: Theory and Evidence

Matthew Ryan Backus, University of California-Berkeley

Abstract

Constructing beliefs about the world (whether using formal statistical procedures or more generally) usually requires simplifying assumptions. Unfortunately, it is often cognitively costly or even impossible to consider how all possible assumptions would affect the relevant beliefs. We develop a formal model of individuals who properly recognize uncertainty conditional on the assumptions they make (“within-model uncertainty”), but do not fully appreciate how their assumptions affect their beliefs (“across-model uncertainty”). Our main result is that this leads to overprecision, in the sense of having too low a subjective variance relative to the truth and to the true mean square error of predictions. Making assumptions can also generate disagreement among those with the same information. Further, if assumptions are drawn independently from the correct distribution, the amount of disagreement is exactly equal to the amount of overprecision. We explore these predictions in an experimental setting where we can cleanly vary within-model and across-model uncertainty. Consistent with the theory, subjects respond approximately correctly to increases in within-model uncertainty, but mostly ignore across-model uncertainty. Finally, we analyze observational data from the Survey of Professional Forecasters, finding that forecasters are overprecise, and more overprecise in problems with more disagreement.
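
One way to see the flavor of the disagreement-overprecision link is through the law of total variance: a forecaster who reports only within-model variance understates the outcome's total variance by the across-model component, and that same component equals the dispersion of point forecasts when assumptions are drawn from the correct distribution. The Python sketch below uses illustrative definitions and parameters that are assumptions, not the paper's exact setup.

    # Hypothetical sketch: the outcome depends on an unknown model parameter M plus
    # within-model noise. Forecasters condition on their own assumed m and report
    # only the within-model variance.
    import numpy as np

    rng = np.random.default_rng(1)
    n_draws = 1_000_000
    tau = 1.0      # across-model (assumption) standard deviation
    sigma = 0.5    # within-model standard deviation

    m = rng.normal(0.0, tau, n_draws)            # true model parameter each period
    y = m + rng.normal(0.0, sigma, n_draws)      # realized outcome

    assumptions = rng.normal(0.0, tau, n_draws)  # forecasters' assumptions, drawn from the correct distribution
    subjective_var = sigma**2                    # within-model variance only

    total_var = y.var()                          # ~ tau^2 + sigma^2 (law of total variance)
    overprecision = total_var - subjective_var   # ~ tau^2
    disagreement = assumptions.var()             # dispersion of point forecasts, ~ tau^2

    print(f"total variance of outcome: {total_var:.3f}")
    print(f"overprecision (total - subjective): {overprecision:.3f}")
    print(f"disagreement (variance of forecasts): {disagreement:.3f}")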

Procedural Decision-Making in the Face of Complexity

Kirby Nielsen, California Institute of Technology

Abstract

A large body of work documents that complexity affects individuals’ choices, but the literature has remained mostly agnostic about why. We provide direct evidence that individuals use different choice processes for complex and simple decisions. We hypothesize that individuals resort to “procedures”—cognitively simpler choice processes that we characterize as being easier to describe to another person—as the complexity of the decision environment increases. We test our hypothesis using two experiments, one with choices over lotteries and one with choices over charities. We exogenously vary the complexity of the decision environment and measure the describability of choice processes by how well another individual can replicate the decision-maker’s choices given the decision-maker’s description of how they chose. We find strong support for our hypothesis: Both of our experiments show that individuals’ choice processes are more describable in complex choice environments, which we interpret as evidence that decision-making becomes more procedural as complexity increases. We show that procedural decision-makers choose more consistently and exhibit fewer dominance violations, though we remain agnostic about the causal effect of procedures on decision quality. Additional secondary evidence suggests that procedural decision-making is a choice simplification that reduces the cognitive costs of decision-making.
JEL Classifications
  • D8 - Information, Knowledge, and Uncertainty
  • D9 - Micro-Based Behavioral Economics