Evaluating and Choosing Policies

Paper Session

Saturday, Jan. 3, 2026 2:30 PM - 4:30 PM (EST)

Philadelphia Marriott Downtown, Room 310
Hosted By: Econometric Society
  • Chair: Bo Honoré, Princeton University

Policy Learning with Confidence

Victor Chernozhukov, Massachusetts Institute of Technology
Sokbae (Simon) Lee, Columbia University
Adam M. Rosen, Duke University
Liyang (Sophie) Sun, University College London and CEMFI

Abstract

This paper proposes a framework for selecting policies that maximize expected benefit in the presence of estimation uncertainty, controlling for estimation risk and incorporating risk aversion. The proposed method explicitly balances the size of the estimated benefit against the uncertainty inherent in its estimation, ensuring that chosen policies meet a reporting guarantee: with a pre-specified confidence level, the actual benefit of the implemented policy does not fall below the reported estimate. The approach applies in a variety of settings, including the selection of policy rules that allocate individuals to treatments based on observed characteristics, using both experimental and non-experimental data, and the allocation of limited budgets among competing social programs, among others. Across these applications, the framework offers a principled and robust method for making data-driven policy choices under uncertainty. More broadly, it focuses on policies on the efficient decision frontier: those that offer the maximum estimated benefit for a given acceptable level of estimation risk.
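A stylized sketch of the reporting-guarantee idea (an illustration only, not the authors' actual procedure): report, for each candidate policy, a one-sided lower confidence bound on its estimated benefit, and select the policy whose bound is largest. The helper name `choose_policy_lcb` and the numerical inputs are hypothetical.

```python
from statistics import NormalDist

def choose_policy_lcb(estimates, std_errors, alpha=0.05):
    """Select the candidate policy with the largest one-sided lower
    confidence bound on its estimated benefit. Under approximate
    normality, reporting that bound gives, policy by policy, a
    (1 - alpha) guarantee that the true benefit is not below the
    reported figure."""
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided normal critical value
    lcb = [b - z * s for b, s in zip(estimates, std_errors)]
    best = max(range(len(lcb)), key=lcb.__getitem__)
    return best, lcb[best]

# Hypothetical benefit estimates and standard errors for three policies:
# a precisely estimated moderate benefit can beat a noisier larger one.
idx, reported = choose_policy_lcb([2.0, 3.0, 1.5], [0.5, 1.5, 0.2])
```

Here the second policy has the largest point estimate but, being noisily estimated, a low lower bound, so a risk-averse selection rule passes it over.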

Experimental Design for Policy Choice

Samuel David Higbee, University of Chicago

Abstract

We study how to design experiments for the objective of choosing optimal policies. An experimenter wants to choose a policy to maximize welfare subject to budget or other policy constraints. The effects of counterfactual policies are described by a structural econometric model governed by an unknown parameter. The experimenter has access to some pilot data and has the opportunity to collect additional data through an experiment. The joint experimental design and policy choice problem is a dynamic optimization problem with a very high-dimensional state space, since the chosen policy depends on the realized data. We propose a low-dimensional approximation to the solution and show it is asymptotically optimal under Bayes expected welfare. The method applies to policies allocating discrete as well as continuous treatments, such as cash transfers, prices, or tax credits, which may be targeted on the basis of covariates. We demonstrate the method using the conditional cash transfer program Progresa, showing how to design an experiment to help choose a policy aimed at increasing graduation rates and reducing gender disparities in education. Compared to the original Progresa experiment, the optimal experiment requires 60% fewer observations to obtain equally effective policies.

Evaluating Counterfactuals using Instruments

Michal Kolesár, Princeton University
José Luis Montiel Olea, Cornell University
Jonathan Roth, Brown University

Abstract

In settings with instrumental variables, the TSLS estimator is the most popular way of summarizing causal evidence. Yet in many settings, the instrument monotonicity assumption needed for its causal interpretation is refuted. A prominent example is designs that use the (quasi-)random assignment of defendants to judges as an instrument for incarceration. Ultimately, however, we may not be interested in the TSLS estimand itself, but rather in the impact of some counterfactual policy intervention (e.g., an encouragement to release more defendants). In this paper, we derive tractable sharp bounds on the impact of such counterfactual policies under reasonable sets of assumptions. We show that for a variety of common policy exercises, the bounds do not depend on whether one imposes instrument monotonicity, so one can drop this often tenuous assumption without loss of information. We explore other restrictions that can help tighten the bounds, including the policy invariance assumption commonly used in applications of the marginal treatment effects framework, as well as its relaxations. We illustrate the usefulness of this approach in an application involving the quasi-random assignment of prosecutors to defendants in Massachusetts.

Discussant(s)
Toru Kitagawa, Brown University
Karun Adusumilli, University of Pennsylvania
Alexander Torgovitsky, University of Chicago
JEL Classifications
  • C26 - Instrumental Variables (IV) Estimation
  • C21 - Cross-Sectional Models; Spatial Models; Treatment Effect Models; Quantile Regressions