
Big Data in Time Series: Factor Models

Paper Session

Saturday, Jan. 6, 2018 10:15 AM - 12:15 PM

Marriott Philadelphia Downtown, Meeting Room 406
Hosted By: Econometric Society
  • Chair: Serena Ng, Columbia University

Constrained Principal Components Estimation of Large Approximate Factor Models

Rachida Ouysse, University of New South Wales

Abstract

Principal components (PC) estimation is fundamentally feasible for large approximate factor models because consistency can be achieved along any path of the panel dimensions. The PC method is, however, inefficient under cross-sectional dependence with unknown structure. The approximate factor model of Chamberlain and Rothschild [1983] imposes a bound on the amount of dependence in the error term. This article proposes a constrained principal components (Cn-PC) estimator that incorporates this restriction as external information in the PC analysis of the data. The estimator is computationally tractable: it does not require estimating large covariance matrices and is obtained as the principal components of a regularized form of the data covariance matrix. The paper develops a convergence rate for the factor estimates and establishes asymptotic normality. In a Monte Carlo study, we find that the Cn-PC estimators have good small-sample properties, in terms of both estimation and forecasting performance, when compared to the regular PC method and to the generalized PC method (Choi [2012]).
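
The abstract's key computational point is that the Cn-PC estimator is obtained as principal components of a regularized form of the data covariance matrix. Below is a minimal Python sketch of that general idea; the shrinkage-toward-the-diagonal rule and the scaling conventions are illustrative assumptions, not the paper's specific Cn-PC construction.

import numpy as np

def regularized_pc_factors(X, r, shrink=0.5):
    # Sketch: principal components of a regularized covariance matrix.
    # The shrinkage rule below is a placeholder, not the paper's Cn-PC scheme.
    T, N = X.shape
    Xc = X - X.mean(axis=0)                              # demean each series
    S = Xc.T @ Xc / T                                    # N x N sample covariance
    S_reg = (1 - shrink) * S + shrink * np.diag(np.diag(S))
    eigval, eigvec = np.linalg.eigh(S_reg)               # eigenvalues in ascending order
    loadings = eigvec[:, -r:][:, ::-1] * np.sqrt(N)      # top-r eigenvectors, rescaled
    factors = Xc @ loadings / N                          # T x r estimated factors
    return factors, loadings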

Factor Models with Many Assets: Strong Factors, Weak Factors, and the Two-pass Procedure

Stanislav Anatolyev, CERGE-EI and New Economic School
Anna Mikusheva, Massachusetts Institute of Technology

Abstract

This paper re-examines the estimation of risk premia in factor pricing models. A typical feature of the data used in the empirical literature to estimate such models is the presence of weak factors that are priced and, at the same time, unaccounted-for strong cross-sectional dependence in the errors. Another feature of typically used data is (moderately) high cross-sectional dimensionality. Using an asymptotic framework in which the number of assets/portfolios grows proportionately with the time span of the data while the risk exposures of weak factors are local to zero, we show that in such circumstances the conventional two-pass estimation procedure delivers inconsistent estimates of the risk premia. We propose a modified two-pass procedure based on sample-splitting instrumental variables estimation at the second pass. The proposed estimator of risk premia is robust to the presence of strong unaccounted cross-sectional error dependence, as well as to the presence of included factors that are priced but weak. We derive the many-asset, weak-factor asymptotic distribution of the proposed estimator, show how to construct its standard errors, verify its performance in simulations, and apply it to often-used datasets from existing empirical studies.
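
The modification described above replaces the usual second-pass cross-sectional regression with an instrumental variables step in which betas estimated on one half of the sample instrument betas estimated on the other half. A stylized Python sketch of that sample-splitting idea follows; it omits intercept corrections, standard errors, and the weak-factor asymptotics that are central to the paper.

import numpy as np

def split_sample_two_pass(R, F):
    # Stylized two-pass estimator with sample splitting at the second pass:
    # first-pass betas from one subsample instrument betas from the other.
    T, N = R.shape
    half = T // 2

    def first_pass(Rs, Fs):
        X = np.column_stack([np.ones(len(Fs)), Fs])      # constant plus factors
        coef, *_ = np.linalg.lstsq(X, Rs, rcond=None)
        return coef[1:].T                                 # N x K matrix of betas

    B1 = first_pass(R[:half], F[:half])                   # betas, first subsample
    B2 = first_pass(R[half:], F[half:])                   # betas, second subsample
    rbar = R.mean(axis=0)                                 # average returns
    Z = np.column_stack([np.ones(N), B1])                 # instruments
    W = np.column_stack([np.ones(N), B2])                 # regressors
    coef = np.linalg.solve(Z.T @ W, Z.T @ rbar)           # IV second pass
    return coef[1:]                                       # estimated risk premia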

Common Factors, Trends, and Cycles in Large Datasets

Matteo Barigozzi, London School of Economics
Matteo Luciani, Federal Reserve Board

Abstract

This paper considers an approximate dynamic factor model for a large panel of time series possibly sharing stochastic trends, with the aim of disentangling long-run from short-run co-movements. First, we propose a new Quasi Maximum Likelihood estimator of the model based on the Kalman filter and the Expectation Maximisation (EM) algorithm for non-stationary data. This estimator is shown to be more efficient than traditional estimators based on principal component analysis of first differences and their integration. Second, we show how to separate trends and cycles in the estimated factors by using a non-parametric decomposition based on eigenanalysis of a matrix similar to the long-run covariance matrix. Third, we employ our methodology to estimate aggregate real output, or Gross Domestic Output (GDO), and the output gap on a panel of US quarterly macroeconomic indicators. Specifically, we first derive an estimate of GDO as that part of GDP/GDI that is driven only by the common shocks, and then, by applying our trend-cycle decomposition to the factors driving GDO, we produce an estimate of the output gap.
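
The second step described above, separating trends from cycles in the estimated factors via eigenanalysis of a matrix similar to the long-run covariance matrix, can be illustrated roughly as follows. The Bartlett-kernel long-run covariance of factor differences and the projection onto the leading eigenvectors are illustrative assumptions; the paper's non-parametric decomposition differs in its details.

import numpy as np

def trend_cycle_split(Fhat, n_trends, bandwidth=4):
    # Rough sketch: estimate a long-run covariance matrix of the factor
    # differences with a Bartlett kernel, then treat the directions with the
    # largest long-run variance as common trends and the remainder as cycles.
    dF = np.diff(Fhat, axis=0)
    dF = dF - dF.mean(axis=0)
    T, r = dF.shape
    S = dF.T @ dF / T
    for k in range(1, bandwidth + 1):
        w = 1 - k / (bandwidth + 1)                       # Bartlett weights
        G = dF[k:].T @ dF[:-k] / T
        S += w * (G + G.T)
    eigval, eigvec = np.linalg.eigh(S)
    order = np.argsort(eigval)[::-1]                      # descending long-run variance
    P = eigvec[:, order[:n_trends]]                       # trend directions
    trends = Fhat @ P @ P.T                               # common-trend component
    cycles = Fhat - trends                                # cyclical component
    return trends, cycles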

Economic Predictions with Big Data: The Illusion of Sparsity

Domenico Giannone, Federal Reserve Bank of New York
Michele Lenza, European Central Bank
Giorgio E. Primiceri, Northwestern University

Abstract

We compare sparse and dense representations of predictive models in macroeconomics, microeconomics and finance. To deal with a large number of possible predictors, we specify a “spike-and-slab” prior that allows for both variable selection and shrinkage. The posterior distribution does not typically concentrate on a single sparse or dense model, but on a wide set of models, with a heterogeneous pattern of sparsity.
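
The spike-and-slab prior referred to in the abstract makes each coefficient exactly zero with some probability and otherwise draws it from a normal slab, so the posterior can favor sparse models, dense models, or anything in between. A minimal Python sketch of a draw from such a prior is below; the hyperparameter names and fixed values are illustrative only, and the paper treats quantities such as the inclusion probability and the slab variance as unknowns with their own priors.

import numpy as np

def draw_spike_and_slab(n_predictors, q=0.2, slab_var=1.0, rng=None):
    # Minimal sketch of a spike-and-slab draw: each coefficient is zero with
    # probability 1 - q ("spike") and N(0, slab_var) otherwise ("slab").
    # q and slab_var are fixed here for illustration only.
    rng = np.random.default_rng() if rng is None else rng
    include = rng.random(n_predictors) < q                # variable-selection indicators
    slab = rng.normal(0.0, np.sqrt(slab_var), n_predictors)
    return np.where(include, slab, 0.0)                   # coefficient vector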
JEL Classifications
  • C32 - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes; State Space Models
  • C5 - Econometric Modeling