Evaluating and Choosing Policies
Paper Session
Saturday, Jan. 3, 2026 2:30 PM - 4:30 PM (EST)
- Chair: Bo Honore, Princeton University
Experimental Design for Policy Choice
Abstract
We study how to design experiments for the objective of choosing optimal policies.
An experimenter wants to choose a policy to maximize welfare
subject to budget or other policy constraints.
The effects of counterfactual policies are described by a
structural econometric model governed by an unknown parameter.
The experimenter has access to some pilot data,
and has the opportunity to collect additional data through an experiment.
The joint experimental design and policy choice problem is a
dynamic optimization problem with a very high-dimensional state space,
since the chosen policy depends on the realized data.
We propose a low-dimensional approximation to the solution
and show it is asymptotically optimal under Bayes expected welfare.
The method applies to policies allocating discrete
as well as continuous treatments,
such as cash transfers, prices, or tax credits,
which may be targeted on the basis of covariates.
We demonstrate the method using the conditional cash transfer program
Progresa,
showing how to design an experiment to help choose a policy aimed at
increasing graduation rates and reducing gender disparities in education.
Compared to the original Progresa experiment,
the optimal experiment requires 60% fewer observations
to obtain equally effective policies.
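For readers unfamiliar with the setup, the joint design-and-policy problem the abstract describes can be written in stylized notation (the symbols below are illustrative, not taken from the paper):

```latex
\max_{d \in \mathcal{D}} \;
\mathbb{E}_{\theta}\!\left[\,
  \mathbb{E}_{X \mid \theta, d}\!\left[\,
    \max_{t \in \mathcal{T}} \;
    \mathbb{E}\!\left[ W(t,\theta) \mid X \right]
  \,\right]
\,\right]
```

Here $d$ is the experimental design, $X$ the data it generates, $t$ a policy in the feasible set $\mathcal{T}$, and $W$ welfare under parameter $\theta$. The inner maximization depends on the realized data $X$, which is what makes the exact dynamic problem high-dimensional and motivates the paper's low-dimensional approximation.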
Evaluating Counterfactuals using Instruments
Abstract
In settings with instrumental variables, the TSLS estimator is the most popular way of summarizing causal evidence. Yet in many settings, the instrument monotonicity assumption needed for its causal interpretation is refuted. A prominent example is designs using the (quasi-)random assignment of defendants to judges as an instrument for incarceration. But ultimately, we may not be interested in the TSLS estimand itself, but rather in the impact of some counterfactual policy intervention (e.g. an encouragement to release more defendants). In this paper, we derive tractable sharp bounds on the impact of such counterfactual policies under reasonable sets of assumptions. We show that for a variety of common policy exercises, the bounds do not depend
on whether one imposes instrument monotonicity, and thus one can drop this often tenuous assumption without loss of information. We explore other restrictions that can
help to tighten the bounds, including the policy invariance assumption commonly used in applications of the marginal treatment effects framework and its relaxations. We
illustrate the usefulness of this approach in an application involving the quasi-random assignment of prosecutors to defendants in Massachusetts.
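For context, the standard result the abstract alludes to is that with a binary instrument $Z$ and binary treatment $D$, the TSLS (Wald) estimand identifies a local average treatment effect only under instrument monotonicity (Imbens and Angrist):

```latex
\beta_{\text{TSLS}}
= \frac{\mathbb{E}[Y \mid Z=1] - \mathbb{E}[Y \mid Z=0]}
       {\mathbb{E}[D \mid Z=1] - \mathbb{E}[D \mid Z=0]}
= \mathbb{E}\!\left[ Y(1) - Y(0) \mid D(1) > D(0) \right]
```

The second equality requires that no unit is a defier, i.e. $D(1) \geq D(0)$ for all units; when that assumption fails, as in the judge-assignment designs mentioned above, the ratio no longer has this causal interpretation, which is the gap the paper's counterfactual bounds address.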
Discussant(s)
- Toru Kitagawa, Brown University
- Karun Adusumilli, University of Pennsylvania
- Alexander Torgovitsky, University of Chicago
JEL Classifications
- C26 - Instrumental Variables (IV) Estimation
- C21 - Cross-Sectional Models; Spatial Models; Treatment Effect Models; Quantile Regressions