
George Judge: 100 and Counting. Econometrics in Agricultural and Applied Economics

Paper Session

Sunday, Jan. 4, 2026 8:00 AM - 10:00 AM (EST)

Philadelphia Marriott Downtown
Hosted By: Agricultural and Applied Economics Association
  • Chair: Jill J. McCluskey, Washington State University

George Judge’s Contributions to Econometrics in Agricultural and Applied Economics

Gordon Rausser, University of California-Berkeley
Sofia Villas-Boas, University of California-Berkeley

Abstract

This presentation discusses George Judge’s contributions, which have significantly impacted the field of econometrics through his innovative research, influential textbooks, and role as a mentor and educator.

George Garrett Judge’s body of work constitutes one of the most intellectually coherent and forward-looking research programs in modern econometrics, spanning Stein-rule estimation, spatial equilibrium modeling, and information-theoretic inference. What appears at first to be a diverse set of contributions is in fact organized around a single foundational question: How can economists recover reliable information about complex systems from noisy, incomplete, and imperfect data? Judge approached this challenge by advancing new estimators, reformulating spatial general equilibrium, and ultimately developing an entropy-based framework that integrates information theory, statistical mechanics, and computational methods. His vision redefines econometrics for an information-rich but uncertainty-dominated world, emphasizing epistemological humility, out-of-sample predictive performance, and the dynamic recovery of information over static parameter estimation. Across more than 150 articles, 16 books, and decades of mentorship, Judge reshaped agricultural economics, applied economics, and econometrics more broadly.

Testing Parametric Distribution Family Assumptions via Differences in Differential Entropy

Ron C. Mittelhammer, Washington State University
George Judge, University of California-Berkeley
Miguel Henry, OnPoint Analytics

Abstract

We present a widely applicable statistical testing procedure for assessing the validity of hypotheses about which parametric distribution family generated a random sample of data. The test provides a unified framework for conducting such tests across a wide range of distribution families, and has asymptotic validity derived directly from maximum likelihood, bootstrapping, and kernel density estimator (KDE) principles. The test is straightforward to implement and interpret, and can be computed efficiently. It is based on the concept of maximum entropy distributions and on the degree of divergence between two sample estimates of differential entropy: a direct MLE estimate under the null hypothesis and a bootstrapped KDE estimate. We refer to the testing principle as the Difference in Differential Entropy (DDE) approach to testing hypotheses about population distributions. Unlike some alternatives in the literature, there is no need for the user to specify tuning parameters, choose evaluation points or grids, or contend with complicated idiosyncratic regularity conditions. The user needs only to choose a parametric family as the null distribution from a large catalogue of possible families; the test is automated from that point forward. Sampling experiments illustrate the application of the test and its finite-sample behavior, including its size accuracy and substantial empirical power even for relatively small samples.
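The abstract's core recipe — fit the null family by MLE, compare its differential entropy with a KDE-based entropy estimate, and calibrate the divergence by bootstrapping from the fitted null — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names, the resubstitution KDE entropy estimate, and the centered two-sided bootstrap p-value are all assumptions for the sketch.

```python
import numpy as np
from scipy import stats


def dde_statistic(sample, null_family=stats.norm):
    """Difference-in-differential-entropy statistic (illustrative sketch).

    Compares the differential entropy of the MLE-fitted null distribution
    with a KDE-based (resubstitution) estimate of differential entropy.
    """
    # MLE fit of the hypothesized parametric family
    params = null_family.fit(sample)
    h_mle = null_family(*params).entropy()

    # KDE-based entropy estimate: -mean(log f_hat(x_i))
    kde = stats.gaussian_kde(sample)
    h_kde = -np.mean(np.log(kde(sample)))

    return float(h_mle - h_kde)


def dde_test(sample, null_family=stats.norm, n_boot=200, seed=0):
    """Parametric-bootstrap calibration of the DDE statistic (sketch)."""
    rng = np.random.default_rng(seed)
    t_obs = dde_statistic(sample, null_family)

    # Resample from the MLE-fitted null to approximate the null distribution
    # of the statistic.
    params = null_family.fit(sample)
    null_dist = null_family(*params)
    t_boot = np.array([
        dde_statistic(null_dist.rvs(size=len(sample), random_state=rng),
                      null_family)
        for _ in range(n_boot)
    ])

    # Centered two-sided bootstrap p-value (one of several possible choices)
    p = np.mean(np.abs(t_boot - t_boot.mean()) >= np.abs(t_obs - t_boot.mean()))
    return t_obs, float(p)
```

Swapping `stats.norm` for another `scipy.stats` family (e.g. `stats.expon` or `stats.gamma`) changes the null hypothesis without touching the rest of the procedure, which mirrors the "choose a family and the test is automated" property described above.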

Understanding Treatment Effects Under Various Forms of Dependence

Bryan S. Graham, University of California-Berkeley
Michael S. Jansson, University of California-Berkeley
Yassine Sbai Sassi, New York University

Abstract

Efficient estimation of average treatment effects (ATEs) under random assignment to independent units is well understood. Here we consider ATE estimation in the presence of dyadic dependence across units. We explore the impact of such dependence on achievable rates of convergence and on optimal experimental design.

Information and Decision Theoretic Approach to Partial Identification

Amos Golan, American University

Abstract

The available information is usually too complex, insufficient, and imperfect to deliver a unique solution for most modeling and inference problems. In fact, unless some strong unverifiable assumptions are imposed, there are multiple solutions consistent with the known information and data. This is particularly true in the social, behavioral, and economic sciences, as well as in other complex and evolving systems. Such problems are called underdetermined. One approach for handling them is an Information-Theoretic (IT), decision-theoretic approach within a constrained optimization setup: all information enters as constraints, the decision function is an information-theoretic one, and priors (non-sample information) enter through the decision function. Another way of handling such problems is the partial identification approach, developed mostly since the 1990s by Manski and commonly used in recent econometric studies (Manski, 2003; Tamer, 2010). With this approach, a set of possible solutions (the ‘identified set’) is determined conditional on the limited information; the unique solution of interest lies within that set. In this talk I show the interrelationship between these two distinct approaches for handling partially identified, or underdetermined, problems. To do so, I build on Manski’s recent work on statistical decision theory for evaluating models in decision making (Manski, 2021). I show under what assumptions the two yield the same solutions and under what conditions they differ. I demonstrate that the IT approach is simpler, rests on a smaller set of axioms, and dominates Manski’s approach for certain types of problems. I present the main results via simple examples accompanied by visual representations of the theory, and then present simulated experiments contrasting the two approaches for modeling partially identified problems. To analyze these experiments, I use the MinMax Regret criterion proposed by Manski (2021).
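The IT setup described above — information entering as constraints, with an information-theoretic criterion selecting one distribution from the many consistent with them — can be illustrated with a minimal maximum-entropy example: recovering a distribution on a finite support from a single moment condition. This is a generic textbook sketch (Jaynes' dice-style problem), not Golan's specific formulation; the function name and the root-finding approach are assumptions for the illustration.

```python
import numpy as np
from scipy.optimize import brentq


def max_entropy_probs(support, mean):
    """Maximum-entropy distribution on a finite support with a given mean.

    The moment condition enters as a constraint; Shannon entropy is the
    information-theoretic criterion that picks a unique distribution out of
    the underdetermined set. The solution has the exponential (Gibbs) form
    p_k proportional to exp(lam * x_k), with lam chosen so the mean
    constraint binds.
    """
    support = np.asarray(support, dtype=float)

    def tilted_mean_gap(lam):
        # Center the support before exponentiating for numerical stability.
        w = np.exp(lam * (support - support.mean()))
        p = w / w.sum()
        return p @ support - mean

    # The tilted mean is monotone in lam, so a sign-changing bracket suffices.
    lam = brentq(tilted_mean_gap, -50.0, 50.0)
    w = np.exp(lam * (support - support.mean()))
    return w / w.sum()
```

For a die with faces 1..6 and an observed mean of 4.5, `max_entropy_probs(range(1, 7), 4.5)` returns probabilities that increase monotonically toward the high faces, the unique entropy-maximizing member of the identified set.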

Discussant(s)
Daniel McFadden
,
University of California-Berkeley
JEL Classifications
  • C4 - Econometric and Statistical Methods: Special Topics
  • Q1 - Agriculture