What Can AI Do in Economics?

Paper Session

Sunday, Jan. 9, 2022 10:00 AM - 12:00 PM (EST)

Hosted By: American Economic Association
  • Chair: Lars Peter Hansen, University of Chicago

Deep Neural Networks for Estimation and Inference

Max H. Farrell, University of Chicago
Tengyuan Liang, University of Chicago
Sanjog Misra, University of Chicago

Abstract

We study deep neural networks and their use in semiparametric inference. We establish novel rates of convergence for deep feedforward neural nets. Our new rates are sufficiently fast (in some cases minimax optimal) to allow us to establish valid second-step inference after first-step estimation with deep learning, a result also new to the literature. Our estimation rates and semiparametric inference results handle the current standard architecture: fully connected feedforward neural networks (multi-layer perceptrons), with the now-common rectified linear unit activation function and a depth explicitly diverging with the sample size. We discuss other architectures as well, including fixed-width, very deep networks. We establish nonasymptotic bounds for these deep nets for a general class of nonparametric regression-type loss functions, which includes as special cases least squares, logistic regression, and other generalized linear models. We then apply our theory to develop semiparametric inference, focusing on causal parameters for concreteness, such as treatment effects, expected welfare, and decomposition effects. Inference in many other semiparametric contexts can be readily obtained. We demonstrate the effectiveness of deep learning with a Monte Carlo analysis and an empirical application to direct mail marketing.
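The architecture the theory covers is the standard fully connected feedforward network with ReLU activations, trained on a regression-type loss. A minimal numpy sketch of that architecture class follows; the data, the single hidden layer, and the width are illustrative toys, not the paper's estimator or its rate-optimal depth and width choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonparametric regression target: y = x^2 + noise (illustrative only)
X = rng.uniform(-1, 1, size=(512, 1))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(512)

# One-hidden-layer ReLU network (a small member of the MLP family the
# paper's theory covers; sizes here are arbitrary choices for the sketch)
H = 32
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.5
b2 = 0.0

lr = 0.1
for _ in range(3000):
    Z = X @ W1 + b1            # hidden pre-activations
    A = np.maximum(Z, 0.0)     # ReLU activation
    pred = A @ W2 + b2
    err = pred - y             # least-squares loss gradient direction
    gW2 = A.T @ err / len(y)
    gb2 = err.mean()
    dA = np.outer(err, W2) * (Z > 0)   # backprop through ReLU
    gW1 = X.T @ dA / len(y)
    gb1 = dA.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.maximum(X @ W1 + b1, 0) @ W2 + b2 - y) ** 2)
```

In the paper's two-step setting, a fit like this would serve only as the first-step nonparametric estimate; the semiparametric inference machinery is built on top of it.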

Inference on Weighted Average Value Function with High-Dimensional State Space

Victor Chernozhukov, Massachusetts Institute of Technology
Whitney Newey, Massachusetts Institute of Technology
Vira Semenova, University of California-Berkeley

Abstract

This paper gives a consistent, asymptotically normal estimator of the expected value function when the state space is high-dimensional and the first-stage nuisance functions are estimated by modern machine learning tools. First, we show that the value function is orthogonal to the conditional choice probability; therefore, this nuisance function needs to be estimated only at the N^(-1/4) rate. Second, we give a correction term for the transition density of the state variable. The resulting orthogonal moment is robust to misspecification of the transition density and does not require this nuisance function to be consistently estimated. Third, we generalize this result by considering the weighted expected value. In this case, the orthogonal moment is doubly robust with respect to the transition density and the additional second-stage nuisance functions entering the correction term. We complete the asymptotic theory by providing bounds on second-order asymptotic terms.
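The orthogonality and double-robustness logic can be seen in a much simpler static analogue. The sketch below uses a generic doubly robust (AIPW-style) moment on toy data, not the paper's dynamic value-function estimator: with a correct outcome model, the orthogonal moment stays near the truth even under a deliberately misspecified choice probability, while the naive inverse-probability plug-in does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Toy setup: binary "choice" D with a logistic propensity, outcome Y
X = rng.standard_normal(n)
p = 1 / (1 + np.exp(-X))          # true conditional choice probability
D = (rng.uniform(size=n) < p).astype(float)
Y = 1 + 2 * X + D + rng.standard_normal(n)

mu1 = 1 + 2 * X + 1               # correct outcome model E[Y | D=1, X]
p_bad = np.full(n, 0.5)           # deliberately misspecified propensity

# Doubly robust moment for E[Y(1)] (true value 2): remains consistent
# because the outcome model is correct even though the propensity is wrong
theta_dr = np.mean(mu1 + D * (Y - mu1) / p_bad)

# Naive inverse-propensity estimate with the same wrong propensity is biased
theta_ipw = np.mean(D * Y / p_bad)
```

The paper's contribution is constructing the analogous correction term for the transition density in a dynamic, high-dimensional state-space setting, where the same insensitivity to nuisance-function error delivers valid inference.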

Deep Learning Classification: Modeling Discrete Labor Choice

Lilia Maliar, City University of New York-Graduate Center
Serguei Maliar, Santa Clara University

Abstract

We introduce a deep learning classification (DLC) method for analyzing equilibrium in discrete-continuous choice dynamic models. As an illustration, we solve Krusell and Smith's (1998) heterogeneous-agent model with incomplete markets, a borrowing constraint, and an indivisible labor choice. The novel feature of our analysis is that we construct state-contingent discontinuous decision functions that tell us when the agent switches from one employment state to another. We use deep learning not only to characterize the discrete indivisible labor choice but also to perform model reduction and to deal with multicollinearity. Our TensorFlow-based implementation of DLC is tractable in models with thousands of state variables.
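The classification step can be sketched in a deliberately simplified form. The toy below substitutes a one-layer logistic classifier for a deep TensorFlow network and uses an invented reservation-threshold rule as the "true" switching function; it illustrates learning a discrete work/no-work boundary from states, not the authors' DLC implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Toy discrete labor choice: work (1) if productivity exceeds a wealth-
# dependent reservation threshold (a hypothetical stand-in for the model's
# discontinuous switching rule)
assets = rng.uniform(0, 1, n)
prod = rng.uniform(0, 1, n)
work = (prod > 0.3 + 0.4 * assets).astype(float)

X = np.column_stack([assets, prod, np.ones(n)])
w = np.zeros(3)

# Logistic classification by gradient descent; the learned decision
# boundary plays the role of the state-contingent switching function
for _ in range(3000):
    prob = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (prob - work) / n

acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == (work == 1))
```

In the paper's setting the classifier is deep and the state space has thousands of dimensions, which is where model reduction and handling of multicollinearity become essential.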

Exploiting Symmetry in High-Dimensional Dynamic Programming

Mahdi Ebrahimi Kahou, University of British Columbia
Jesús Fernández-Villaverde, University of Pennsylvania
Jesse Perla, University of British Columbia
Arnav Sood, University of British Columbia

Abstract

We propose a new method for solving high-dimensional dynamic programming problems and recursive competitive equilibria with a very large (but finite) number of heterogeneous agents using deep learning. The "curse of dimensionality" is avoided due to four complementary techniques: (1) exploiting symmetry in the approximate law of motion and the value function when designing deep learning approximations; (2) constructing a concentration of measure to calculate high-dimensional expectations using only a single Monte Carlo draw for all idiosyncratic shocks; (3) sampling methods to ensure the model fits along manifolds of interest; and (4) using the generalization of overparameterized deep learning models to avoid calculating the stationary distribution. As an application, we compute a global solution of a multi-agent version of the classic Lucas and Prescott (1971) model of "investment under uncertainty" using deep neural networks and arbitrary but symmetric pricing functions. Benchmarking against a linear-quadratic Gaussian case solved with classical control methods, we solve the equilibrium orders of magnitude faster, even in that particular case of certainty equivalence. Finally, we describe how our approach applies to a large class of models in economics.
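Technique (1), symmetry, amounts to making the approximations permutation-invariant in the other agents' states: agent i's value depends on its own state and on exchangeable summaries of everyone else's, so the effective input dimension does not grow with the number of agents. A minimal numpy sketch follows; the moment features are a hypothetical stand-in for whatever exchangeable summary a deep architecture would learn, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)

def symmetric_features(x_i, x_all):
    # Exchangeable summaries of the cross-sectional state distribution:
    # empirical moments are invariant to relabeling the agents
    return np.array([x_i, x_all.mean(), (x_all ** 2).mean()])

N = 1000
states = rng.standard_normal(N)   # toy cross-section of agent states

f1 = symmetric_features(states[0], states)
f2 = symmetric_features(states[0], rng.permutation(states))
# f1 == f2: permuting the other agents' identities leaves the input
# to the value-function approximation unchanged
```

Feeding such invariant features into a network, instead of the raw N-dimensional state vector, is one way the curse of dimensionality is sidestepped in this class of methods.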

Discussant(s)
Andrii Babii, University of North Carolina-Chapel Hill
Lihua Lei, Stanford University
John Rust, Georgetown University
Fedor Iskhakov, Australian National University
JEL Classifications
  • C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling
  • C1 - Econometric and Statistical Methods and Methodology: General