George Judge: 100 and Counting. Econometrics in Agricultural and Applied Economics

Paper Session

Sunday, Jan. 4, 2026 8:00 AM - 10:00 AM (EST)

Philadelphia Marriott Downtown, Room 408
Hosted By: Agricultural and Applied Economics Association
  • Chair: Jill J. McCluskey, Washington State University

George Judge’s Contributions to Econometrics in Agricultural and Applied Economics

Gordon Rausser, University of California-Berkeley
Sofia Villas-Boas, University of California-Berkeley

Abstract

This presentation discusses George Judge’s contributions, which have significantly shaped the field of econometrics through his innovative research, influential textbooks, and role as a mentor and educator.

George Garrett Judge’s body of work constitutes one of the most intellectually coherent and forward-looking research programs in modern econometrics, spanning Stein-rule estimation, spatial equilibrium modeling, and information-theoretic inference. What appears at first to be a diverse set of contributions is in fact organized around a single foundational question: How can economists recover reliable information about complex systems from noisy, incomplete, and imperfect data? Judge approached this challenge by advancing new estimators, reformulating spatial general equilibrium, and ultimately developing an entropy-based framework that integrates information theory, statistical mechanics, and computational methods. His vision redefines econometrics for an information-rich but uncertainty-dominated world, emphasizing epistemological humility, out-of-sample predictive performance, and the dynamic recovery of information over static parameter estimation. Across more than 150 articles, 16 books, and decades of mentorship, Judge reshaped agricultural economics, applied economics, and econometrics more broadly.

Testing Parametric Distribution Family Assumptions via Differences in Differential Entropy

Ron C. Mittelhammer, Washington State University
George Judge, University of California-Berkeley
Miguel Henry, OnPoint Analytics

Abstract

We present a widely applicable statistical testing procedure for assessing hypotheses about which parametric distribution family generated a random sample of data. The test provides a unified framework for conducting such tests across a wide scope of distribution families, and its asymptotic validity derives directly from maximum likelihood, bootstrapping, and kernel density estimator (KDE) principles. It is straightforward to implement and interpret, and can be computed efficiently. The test is based on the concept of maximum entropy distributions and on the degree of divergence between two sample estimates of differential entropy: a direct MLE estimate under the null hypothesis and a bootstrapped KDE estimate of differential entropy. We refer to the testing principle as the Difference in Differential Entropy (DDE) approach for testing hypotheses about population distributions. Unlike some alternatives in the literature, the user need not specify tuning parameters, choose evaluation points or grids, or verify complicated idiosyncratic regularity conditions; the user needs only to choose a parametric family of distributions as the null from a large catalogue of possible families, and the test is automated from that point forward. Sampling experiments illustrate the application of the test and its finite-sample behavior, including its size accuracy and substantial empirical power even for relatively small samples of data.
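The core comparison in the abstract can be sketched in a few lines: contrast the differential entropy implied by the MLE of the null family with a bootstrap-averaged KDE estimate of entropy. This is a minimal illustration for a Gaussian null, not the authors' implementation; the bootstrap scheme, the resubstitution entropy estimator, and the sample sizes are assumptions made for the example, and no critical values are derived here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def mle_entropy_normal(x):
    # Differential entropy of N(mu, sigma^2) at the MLE sigma^2:
    # H = 0.5 * ln(2 * pi * e * sigma^2)
    s2 = np.var(x)  # np.var with ddof=0 is the MLE of sigma^2
    return 0.5 * np.log(2 * np.pi * np.e * s2)

def kde_entropy_bootstrap(x, n_boot=100):
    # Bootstrap-averaged KDE (resubstitution) estimate of entropy:
    # H_hat = -(1/n) * sum_i log f_hat(x_i), averaged over resamples.
    ests = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        kde = gaussian_kde(xb)          # default Scott's-rule bandwidth
        ests.append(-np.mean(np.log(kde(xb))))
    return float(np.mean(ests))

def dde_statistic(x):
    # Difference between the two entropy estimates; near zero when the
    # null family is correct, diverging otherwise.
    return kde_entropy_bootstrap(x) - mle_entropy_normal(x)

# Under the null (data truly Gaussian) the statistic should be small.
x = rng.normal(loc=1.0, scale=2.0, size=400)
stat = dde_statistic(x)
```

In the actual procedure the null distribution of the statistic would be calibrated (e.g., by simulation under the fitted null), but the sketch shows why no user-chosen grids or tuning parameters are needed: the bandwidth and the MLE are both automatic.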

Understanding Treatment Effects Under Various Forms of Dependence

Bryan S. Graham, University of California-Berkeley
Michael Jansson, University of California-Berkeley
Yassine Sbai Sassi, New York University

Abstract

Efficient estimation of average treatment effects (ATEs) under random assignment to independent units is well understood. Here we consider ATE estimation in the presence of dyadic dependence across units, and we explore the impact of such dependence on achievable rates of convergence and on optimal experimental design.

Information and Decision Theoretic Approach to Partial Identification

Amos Golan, American University

Abstract

An information-theoretic maximum entropy (ME) model provides an alternative approach to finding solutions to partially identified models. In these models, given our limited information, we can identify only a solution set rather than point-identifying the parameters of interest. Manski (2021) and others propose using statistical decision functions in general, and the minimax-regret (MMR) criterion in particular, to select a unique solution. Using Manski’s simulations for a missing-data problem and a treatment problem, including an empirical example, we show that ME performs as well as or better than MMR. In additional simulations, ME dominates various other statistical decision functions. ME has an axiomatic underpinning and is computationally efficient.
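A toy version of the missing-data setting shows how an ME-style rule selects a single point from the identified set. This is an illustrative sketch only, not the paper's model: the binary outcome, the simulated missingness rate, and the rule of imputing missing outcomes with the maximum-entropy Bernoulli(0.5) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a binary outcome y observed only with probability 0.7
# (all values here are illustrative assumptions).
n = 1000
y = rng.binomial(1, 0.6, size=n)
observed = rng.random(n) < 0.7

p_obs = observed.mean()          # share of units with observed outcomes
mean_obs = y[observed].mean()    # mean outcome among responders

# Manski-style worst-case bounds on E[y]: missing outcomes could all be
# 0 (lower bound) or all be 1 (upper bound).
lower = mean_obs * p_obs
upper = mean_obs * p_obs + (1 - p_obs)

# ME-style point: impute missing binary outcomes with the
# maximum-entropy Bernoulli(0.5), yielding one point in the set.
me_point = mean_obs * p_obs + 0.5 * (1 - p_obs)
```

For a binary outcome this rule lands at the midpoint of the bounds; the abstract's comparison is about how such a selection performs against minimax-regret and other statistical decision functions, which the sketch does not attempt to reproduce.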

Discussant(s)
Daniel McFadden, University of California-Berkeley
JEL Classifications
  • C4 - Econometric and Statistical Methods: Special Topics
  • Q1 - Agriculture