
Econometrics of Experiments

Paper Session

Sunday, Jan. 5, 2025 1:00 PM - 3:00 PM (PST)

Hilton San Francisco Union Square, Union Square 17 and 18
Hosted By: American Economic Association
  • Chair: Colin Cameron, University of California-Davis

A Contextual Bandit Pipeline with Data-Driven Error Estimation

Sanath Krishnamurthy, Stanford University
Susan Athey, Stanford University
Emma Brunskill, Stanford University

Abstract

Contextual bandits are algorithms for designing adaptive experiments. Participants arrive sequentially, and treatment assignment probabilities are specified by a policy that depends on participant characteristics. Periodically, a new policy is selected based on past data. Our algorithm employs a novel data-driven procedure for selecting the functional form of the outcome models and quantifying the error estimates that determine the balance between exploration and exploitation. By ensuring that the variance of an estimate of the value of the optimal policy is sufficiently low when evaluated using data collected by the algorithm, we achieve state-of-the-art bounds on performance.
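A rough sketch of the batched loop such a pipeline runs is given below. The batch size, the per-arm least-squares outcome models, and the error-based exploration rule are placeholder assumptions for illustration, not the authors' procedure.

    # Minimal batched contextual-bandit sketch (illustrative only; the batch size,
    # linear outcome models, and error-based exploration rule are assumptions,
    # not the data-driven procedure proposed in the paper).
    import numpy as np

    rng = np.random.default_rng(0)
    n_arms, n_batches, batch_size, d = 2, 20, 50, 3

    def true_outcome(x, a):
        # Hypothetical data-generating process used only to simulate participants.
        return x @ np.array([[0.5, -0.2, 0.1], [0.1, 0.4, -0.3]])[a] + rng.normal(scale=0.5)

    X, A, Y = [], [], []
    probs = np.full(n_arms, 1.0 / n_arms)           # start with uniform assignment

    for b in range(n_batches):
        for _ in range(batch_size):
            x = rng.normal(size=d)                  # participant characteristics
            a = rng.choice(n_arms, p=probs)         # assign by the current policy
            X.append(x); A.append(a); Y.append(true_outcome(x, a))

        # Periodically re-fit one outcome model per arm on all data collected so far,
        # with a crude residual-based error proxy to keep exploring uncertain arms.
        X_arr, A_arr, Y_arr = np.array(X), np.array(A), np.array(Y)
        preds, errs = np.zeros(n_arms), np.ones(n_arms)
        for a in range(n_arms):
            mask = A_arr == a
            beta, res, *_ = np.linalg.lstsq(X_arr[mask], Y_arr[mask], rcond=None)
            preds[a] = X_arr.mean(axis=0) @ beta
            if res.size:
                errs[a] = np.sqrt(res[0] / mask.sum())

        # Reserve a small exploration share, split in proportion to estimated error;
        # the remaining probability goes to the arm with the best predicted outcome.
        explore = 0.2 * errs / errs.sum()
        probs = explore.copy()
        probs[np.argmax(preds)] += 1.0 - explore.sum()

    print("final assignment probabilities:", probs)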

Counting Defiers in Health Care with a Design-Based Likelihood for the Joint Distribution of Potential Outcomes

Neil Christy, University of Michigan
Amanda Kowalski, University of Michigan

Abstract

We present a design-based model of a randomized experiment in which the observed outcomes are informative about the joint distribution of potential outcomes within the experimental sample. We derive a likelihood function that maintains curvature with respect to the joint distribution of potential outcomes, even when holding the marginal distributions of potential outcomes constant -- curvature that is not maintained in a sampling-based likelihood that imposes a large sample assumption. Our proposed decision rule guesses the joint distribution of potential outcomes in the sample as the distribution that maximizes the likelihood. We show that this decision rule is Bayes optimal under a uniform prior. Our optimal decision rule differs from and significantly outperforms a "monotonicity" decision rule that assumes no defiers or no compliers. In sample sizes ranging from 2 to 40, we show that the Bayes expected utility of the optimal rule increases relative to the monotonicity rule as the sample size increases. In two experiments in health care, we show that the joint distribution of potential outcomes that maximizes the likelihood need not include compliers even when the average outcome in the intervention group exceeds the average outcome in the control group, and that the maximizer of the likelihood may include both compliers and defiers, even when the average intervention effect is large and statistically significant.
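For a very small experiment with binary potential outcomes, the likelihood-maximizing guess described above can be computed by brute force: enumerate candidate counts of the four principal strata, compute the design-based probability of the observed arm-level outcome counts under complete randomization, and report the maximizer. The sample size and observed counts below are hypothetical, and the sketch is meant only to convey the idea, not the paper's estimator.

    # Toy design-based likelihood over the joint counts of binary potential outcomes
    # (Y(0), Y(1)) in a sample of n units, m of them randomized to treatment.
    # When Y is treatment take-up, n01 counts compliers and n10 counts defiers.
    # All numbers are hypothetical; this is a sketch, not the paper's estimator.
    from itertools import product
    from math import comb

    n, m = 10, 5            # hypothetical sample size and treated-group size
    obs_t1, obs_c1 = 4, 1   # hypothetical counts of Y=1 in the treatment and control arms

    def likelihood(n00, n01, n10, n11):
        # P(observed arm counts | joint counts), summing over how many units of each
        # type land in the treatment group (multivariate hypergeometric assignment).
        total = 0
        for k00, k01, k10, k11 in product(range(n00 + 1), range(n01 + 1),
                                          range(n10 + 1), range(n11 + 1)):
            if k00 + k01 + k10 + k11 != m:
                continue
            treated_ones = k01 + k11                  # treated units with Y(1) = 1
            control_ones = (n10 - k10) + (n11 - k11)  # control units with Y(0) = 1
            if treated_ones == obs_t1 and control_ones == obs_c1:
                total += comb(n00, k00) * comb(n01, k01) * comb(n10, k10) * comb(n11, k11)
        return total / comb(n, m)

    # Decision rule: guess the joint counts (n00, n01, n10, n11) that maximize the likelihood.
    best = max(
        ((n00, n01, n10, n - n00 - n01 - n10)
         for n00 in range(n + 1)
         for n01 in range(n + 1 - n00)
         for n10 in range(n + 1 - n00 - n01)),
        key=lambda counts: likelihood(*counts),
    )
    print("likelihood-maximizing joint counts (n00, n01, n10, n11):", best)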

Dual Identification in Selection and Treatment Choice

Vira Semenova, University of California-Berkeley
David Bruns-Smith, University of California-Berkeley

Abstract

We develop a covariate-assisted approach to partially identified parameters that are solutions to an under-identified system of linear equations with known coefficients. Examples include bounds on mean potential outcomes in choice-theoretic models of IV (e.g., Heckman and Pinto, 2018) and in random utility models. The boundary (i.e., support function) of the proposed identified set is represented as an average of intersections of regression functions, aggregated over the covariate distribution. Extending the results from linear to linear-fractional programs, we generalize the Lee (2009) bounds to accommodate failures of monotonicity.
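In the simplest case, the support function of such an identified set is computed by a pair of linear programs: minimize and maximize a linear functional of the unknown parameter subject to the known linear equalities and non-negativity. The coefficients below are made up for illustration, and the sketch omits the covariate-assisted aggregation and the linear-fractional extension.

    # Sketch: bounds on a linear functional c'theta when theta solves an
    # under-identified system A theta = b with theta in [0, 1]^4. The matrices
    # are hypothetical; this only illustrates the linear-programming step.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 1.0]])   # known coefficients (2 equations, 4 unknowns)
    b = np.array([0.6, 0.7])
    c = np.array([0.0, 1.0, 0.0, 1.0])     # target functional, e.g. a mean potential outcome

    lower = linprog(c,  A_eq=A, b_eq=b, bounds=[(0, 1)] * 4)   # minimize c'theta
    upper = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, 1)] * 4)   # maximize c'theta
    print("identified set for c'theta: [%.3f, %.3f]" % (lower.fun, -upper.fun))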

Testing Mechanisms

Soonwoo Kwon, Brown University
Jonathan Roth, Brown University

Abstract

Economists are often interested in the mechanisms by which a particular treatment affects an outcome. This paper develops tests for the "sharp null of full mediation" that the treatment D operates on the outcome Y only through a particular conjectured mechanism (or set of mechanisms) M. A key observation is that if D is randomly assigned and has a monotone effect on M, then D is a valid instrumental variable for the local average treatment effect (LATE) of M on Y. Existing tools for testing the validity of the LATE assumptions can thus be used to test the sharp null of full mediation when M and D are binary. We develop a more general framework that allows one to test whether the effect of D on Y is fully explained by a potentially multi-valued and multi-dimensional set of mechanisms M, allowing for relaxations of the monotonicity assumption. We further provide methods for lower-bounding the size of the alternative mechanisms when the sharp null is rejected. An advantage of our approach relative to existing tools for mediation analysis is that it does not require stringent assumptions about how M is assigned; on the other hand, our approach helps to answer different questions than traditional mediation analysis by focusing on the sharp null rather than estimating average direct and indirect effects. We illustrate the usefulness of the testable implications in two empirical applications.
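For binary D, M, and Y, one set of inequalities implied by the LATE assumptions (with D as the instrument for M) is P(Y=y, M=1 | D=1) ≥ P(Y=y, M=1 | D=0) and P(Y=y, M=0 | D=0) ≥ P(Y=y, M=0 | D=1) for each y. The check below uses simulated data and plug-in frequencies; it ignores sampling uncertainty and multi-valued mechanisms, both of which the paper's methods address.

    # Plug-in check of the LATE-style testable inequalities with D as the instrument
    # for the mechanism M (binary D, M, Y). The simulated data and the naive
    # comparison are illustrative; they are not the paper's test.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    D = rng.integers(0, 2, n)                          # randomly assigned treatment
    M = (rng.random(n) < 0.3 + 0.4 * D).astype(int)    # mechanism, pushed up by D (monotone)
    Y = (rng.random(n) < 0.2 + 0.5 * M).astype(int)    # outcome depends on D only through M

    def p(y, m, d):
        # Empirical P(Y = y, M = m | D = d).
        sel = D == d
        return np.mean((Y[sel] == y) & (M[sel] == m))

    # Under the sharp null (plus monotonicity of D -> M), each difference below is
    # nonpositive in the population; large positive values are evidence against it.
    violations = []
    for y in (0, 1):
        violations.append(p(y, 1, 0) - p(y, 1, 1))   # P(Y=y, M=1 | D=0) - P(Y=y, M=1 | D=1)
        violations.append(p(y, 0, 1) - p(y, 0, 0))   # P(Y=y, M=0 | D=1) - P(Y=y, M=0 | D=0)
    print("largest plug-in violation:", max(violations))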

Discussant(s)
Toru Kitagawa, Brown University
Aleksey Tetenov, University of Geneva
Soonwoo Kwon, Brown University
Yuanyuan Wan, University of Toronto
JEL Classifications
  • C9 - Design of Experiments
  • H1 - Structure and Scope of Government