What Can AI Do in Economics?
Sunday, Jan. 9, 2022 10:00 AM - 12:00 PM (EST)
- Chair: Lars Peter Hansen, University of Chicago
Inference on Weighted Average Value Function with High-Dimensional State Space
Abstract: This paper gives a consistent, asymptotically normal estimator of the expected value function when the state space is high-dimensional and the first-stage nuisance functions are estimated with modern machine learning tools. First, we show that the value function is orthogonal to the conditional choice probability; therefore, this nuisance function needs to be estimated only at the N^(-1/4) rate. Second, we give a correction term for the transition density of the state variable. The resulting orthogonal moment is robust to misspecification of the transition density and does not require this nuisance function to be consistently estimated. Third, we generalize this result to the weighted expected value. In this case, the orthogonal moment is doubly robust in the transition density and in the additional second-stage nuisance functions entering the correction term. We complete the asymptotic theory by providing bounds on the second-order asymptotic terms.
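The double-robustness property described above can be illustrated in a much simpler static setting with an AIPW-style moment, where the estimate remains consistent even when one nuisance function (here, the choice probability) is deliberately misspecified. This is a minimal NumPy sketch, not the paper's estimator; the data-generating process and nuisance estimates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                      # state variable
p = 1 / (1 + np.exp(-x))                    # true conditional choice probability
d = rng.binomial(1, p)                      # observed discrete choice
y = 2.0 * x + rng.normal(size=n)            # outcome, observed when d == 1

# First-stage nuisances: the choice probability is deliberately biased,
# the outcome model is correct -- double robustness tolerates this.
p_hat = np.clip(p + 0.05, 1e-3, 1 - 1e-3)   # misspecified propensity
m_hat = 2.0 * x                             # correct outcome model

# Orthogonal (doubly robust) moment: plug-in plus a correction term
psi = m_hat + d / p_hat * (y - m_hat)
theta = psi.mean()                          # estimate of E[Y] (true value 0)
```

Because the correction term has mean (approximately) zero whenever either nuisance is correct, first-order errors in `p_hat` do not bias `theta`, which is the same mechanism that relaxes the estimation rate for the conditional choice probability in the paper.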
Deep Learning Classification: Modeling Discrete Labor Choice
Abstract: We introduce a deep learning classification (DLC) method for analyzing equilibrium in discrete-continuous choice dynamic models. As an illustration, we solve Krusell and Smith's (1998) heterogeneous-agent model with incomplete markets, a borrowing constraint, and indivisible labor choice. The novel feature of our analysis is that we construct state-contingent discontinuous decision functions that tell us when the agent switches from one employment state to another. We use deep learning not only to characterize the discrete indivisible labor choice but also to perform model reduction and to deal with multicollinearity. Our TensorFlow-based implementation of DLC is tractable in models with thousands of state variables.
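The core idea of treating a discontinuous, state-contingent labor decision as a classification problem can be sketched with a toy example: a hypothetical work/no-work rule that depends on assets and productivity, recovered by fitting a logistic classifier with gradient descent. This is a minimal NumPy stand-in for the paper's TensorFlow implementation; the state variables, decision rule, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
a = rng.uniform(0, 10, n)               # assets (continuous state)
z = rng.uniform(0, 2, n)                # idiosyncratic productivity (state)
# Hypothetical discontinuous policy: work (label 1) when 5*z exceeds assets
work = (5.0 * z - a > 0).astype(float)

# Logistic classifier on [1, a, z], trained by full-batch gradient descent
X = np.column_stack([np.ones(n), a, z])
w = np.zeros(3)
for _ in range(2000):
    prob = 1 / (1 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (prob - work) / n  # gradient of the logistic loss

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == work).mean()
```

The fitted decision boundary approximates the switching surface between employment states; in the paper this role is played by a deep network over a high-dimensional state space rather than a linear score over two states.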
Exploiting Symmetry in High-Dimensional Dynamic Programming
Abstract: We propose a new method for solving high-dimensional dynamic programming problems and recursive competitive equilibria with a very large (but finite) number of heterogeneous agents using deep learning. The "curse of dimensionality" is avoided through four complementary techniques: (1) exploiting symmetry in the approximate law of motion and the value function when designing deep learning approximations; (2) exploiting concentration of measure to calculate high-dimensional expectations using only a single Monte Carlo draw for all idiosyncratic shocks; (3) sampling methods that ensure the model fits along manifolds of interest; and (4) using the generalization properties of overparameterized deep learning models to avoid calculating the stationary distribution. As an application, we compute a global solution of a multi-agent version of Lucas and Prescott's (1971) classic model of "investment under uncertainty" using deep neural networks and arbitrary but symmetric pricing functions. Benchmarking against a linear-quadratic Gaussian case solved with classical control methods, we solve the equilibrium orders of magnitude faster, even in that particular case of certainty equivalence. Finally, we describe how our approach applies to a large class of models in economics.
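Technique (1), exploiting symmetry across heterogeneous agents, typically means that an agent's value or policy depends on the other agents' states only through permutation-invariant statistics of their distribution. A minimal sketch of such a representation, using mean pooling of moments (an illustrative choice, not the paper's architecture):

```python
import numpy as np

def symmetric_features(own_state, others):
    """Embed the other agents' states via pooled moments, so the
    representation is invariant to permuting (relabeling) those agents."""
    pooled = np.array([others.mean(), (others ** 2).mean()])
    return np.concatenate([[own_state], pooled])

rng = np.random.default_rng(2)
others = rng.normal(size=1000)          # states of the other agents

f1 = symmetric_features(0.3, others)
f2 = symmetric_features(0.3, rng.permutation(others))
permutation_invariant = np.allclose(f1, f2)
```

Because the pooled features are unchanged by any reordering of the other agents, a network taking `symmetric_features` as input automatically respects the symmetry of the equilibrium, collapsing an N-dimensional cross-sectional state into a few sufficient statistics.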
- C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling
- C1 - Econometric and Statistical Methods and Methodology: General