
Methodological Advances in Environmental Economics

Paper Session

Sunday, Jan. 5, 2025 10:15 AM - 12:15 PM (PST)

Parc 55, Divisadero
Hosted By: Association of Environmental and Resource Economists
  • Chair: Catherine Kling, Cornell University

Causality and Resilience

Paul Ferraro, Johns Hopkins University

Abstract

Hundreds of empirical studies report on the factors that increase resilience to negative shocks in coupled human-nature systems. Yet, in a set of 500 empirical studies that investigate resilience to climate shocks, I find that most studies fail to clearly define their target causal effects (estimands) or explain how their design identifies these effects. Among the studies that do specify the target causal effect, many use an empirical design in which the effect is not identified. Thus, the empirical literature on resilience to climate shocks is largely uninterpretable. In this study, I start with the idealized experiment: a factorial experiment in which shocks and attributes that improve resilience to the shock are randomized across units from a target population (“units” like households or watersheds). For logistical and ethical reasons, we cannot run such an experiment. However, the idealized experimental design implies that resilience scholars who use observational designs, like experimentalists who use factorial designs, can estimate multiple causal effects, some of which are more relevant to policy and science than others. I define these causal effects, describe their utility in terms of common scientific and policy questions, describe how identification and estimation of each causal effect require a different observational design, and assess the climate resilience literature in light of these findings. Without this information, scientists will struggle to develop appropriate empirical designs for investigating resilience and will find it challenging to evaluate the quality of published empirical studies on resilience. I conclude by describing a set of best practices for empirical studies of resilience in a range of literatures, including climate science, economics, ecology, cognitive science (brain resilience), political science, and sociology.
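
To make the distinction between estimands concrete, the following Python sketch simulates the kind of idealized factorial experiment the abstract describes, in which a climate shock and a resilience-enhancing attribute are independently randomized across units and several distinct causal effects are computed from the same design. The outcome model, parameter values, and variable names are illustrative assumptions of this sketch, not taken from the paper.

# Simulated 2x2 factorial experiment: shock S and resilience attribute A randomized
# across units; several distinct estimands are recovered from the same design.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # number of units (e.g., households)
S = rng.integers(0, 2, n)                     # shock indicator, randomized
A = rng.integers(0, 2, n)                     # resilience attribute, randomized
# Assumed outcome model: the attribute dampens the shock's damage (interaction term).
Y = 10 - 4 * S + 1 * A + 3 * S * A + rng.normal(0, 1, n)

def mean_y(s, a):
    return Y[(S == s) & (A == a)].mean()

# Estimand 1: average effect of the shock among units without the attribute.
shock_effect_no_attr = mean_y(1, 0) - mean_y(0, 0)
# Estimand 2: average effect of the shock among units with the attribute.
shock_effect_attr = mean_y(1, 1) - mean_y(0, 1)
# Estimand 3: the "resilience effect" -- how much the attribute changes the shock's
# impact (the interaction), which many policy discussions implicitly target.
resilience_effect = shock_effect_attr - shock_effect_no_attr
# Estimand 4: average effect of the attribute in the absence of any shock.
attr_effect_no_shock = mean_y(0, 1) - mean_y(0, 0)

print(f"Shock effect without attribute:  {shock_effect_no_attr:+.2f}")
print(f"Shock effect with attribute:     {shock_effect_attr:+.2f}")
print(f"Resilience (interaction) effect: {resilience_effect:+.2f}")
print(f"Attribute effect without shock:  {attr_effect_no_shock:+.2f}")

Each of these four quantities answers a different scientific or policy question, and each would demand a different observational design once randomization is unavailable.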

Learning, Catastrophic Risk and Ambiguity in the Climate Change Era

Frances Moore, University of California-Davis

Abstract

Climate (the probability distribution over weather) is not directly observable. Instead it must be estimated, typically using the historical weather record. The assumption that the climate is stationary, and that the set of historical observations is therefore representative of today's conditions, has been central to both engineering and financial methods for weather risk management. Anthropogenic climate change undermines this assumption, rendering past weather observations potentially uninformative about the current distribution of weather risks. This reduces the information available to actors and increases uncertainty in the estimated climate distribution. Using a motivating case study of extreme rainfall-related flood damages in New York City, this paper develops a Bayesian learning model, applied to a long record of daily rainfall intensity, and shows how relaxing the stationarity assumption increases variance in the posterior climate distribution. This uncertainty can interact with a steeply non-linear damage function (derived from claims under the National Flood Insurance Program) to greatly increase the mean and variance of the loss distribution. I show how the added uncertainty from relaxing the stationarity assumption alone, with no change in historical weather data or the damage function, could ripple through insurance markets in the form of higher and more volatile premiums and higher reinsurance costs, with limited potential for diversification within the insurance sector. These effects are consistent with observed changes in the U.S. property insurance market in recent years.
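
As a rough illustration of the mechanism, the Python sketch below contrasts a stationary and a discounted (non-stationary) Bayesian update of a rainfall exceedance probability and passes the posterior through a convex damage function. The Beta-Bernoulli model, discount factor, and damage function are assumptions of this sketch, not the paper's specification.

# Bayesian learning about the probability p that daily rainfall exceeds a
# damage-relevant threshold, with and without the stationarity assumption.
import numpy as np

rng = np.random.default_rng(1)
n_days = 40 * 365                        # a long daily record (assumed length)
true_p = 0.004                           # assumed exceedance probability
exceed = rng.random(n_days) < true_p     # synthetic exceedance record

# Stationary model: every historical day is equally informative (Beta-Bernoulli update).
a0, b0 = 1.0, 1.0                        # flat prior
a_stat = a0 + exceed.sum()
b_stat = b0 + (~exceed).sum()

# Non-stationary model: discount older observations, shrinking the effective sample size.
decay = 0.9995                           # assumed per-day discount factor
w = decay ** np.arange(n_days)[::-1]     # recent days weighted ~1, old days much less
a_ns = a0 + (w * exceed).sum()
b_ns = b0 + (w * (~exceed)).sum()

def posterior_summary(a, b, n_draws=200_000):
    p = rng.beta(a, b, n_draws)
    loss = 1e6 * p ** 2                  # assumed steeply convex damage function
    return p.var(), loss.mean(), loss.std()

for label, (a, b) in [("stationary", (a_stat, b_stat)), ("non-stationary", (a_ns, b_ns))]:
    var_p, mean_loss, sd_loss = posterior_summary(a, b)
    print(f"{label:>15}: var(p)={var_p:.2e}, E[loss]={mean_loss:,.0f}, sd[loss]={sd_loss:,.0f}")

Because the damage function is convex, the wider non-stationary posterior raises both the expected loss and its dispersion, even though the underlying weather record is identical in the two cases.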

Difference-in-Differences with Endogenous Externalities: Model and Application to Climate Econometrics

Sandy Dall'erba, University of Illinois-Urbana-Champaign
Andre Chagas, University of Sao Paulo
Yilan Xu, University of Illinois-Urbana-Champaign
William Ridley, University of Illinois-Urbana-Champaign

Abstract

Recent literature has highlighted the importance of incorporating spatial dependence within the difference-in-differences (DID) framework. Spatial dependence arises when the treatments are spatially correlated and/or the individuals' responses to the treatment are prone to spatial autocorrelation. Spatially autocorrelated treatments do not violate the stable unit treatment value assumption (SUTVA). However, spatially autocorrelated responses do violate SUTVA, leading to potentially biased and inconsistent DID estimates of treatment effects when spillovers are disregarded. In this paper, we extend spatial DID (SDID) by considering the case where regions are connected through an economic network that is itself prone to change in response to the treatment. We call this approach the instrumental variable network difference-in-differences, or IV-NDID for short. This framework accounts for the endogeneity of the network to the treatment in a first-stage regression, while the effect of the treatment on the treated areas and on any member of the network is measured in the second stage. As such, our approach differs from other contributions in which the network is endogenous but time-invariant. We apply this approach to the impact of drought (the treatment), first on global wheat trade and, second, on wheat production and area planted. Our results indicate that local wheat production and area planted react negatively to a local drought but positively to a drought in destination markets. Failing to account for the transmission of the treatment effect through the trade network, as well as the adjustment of the trade network itself in response to the treatment, therefore leads to underestimates of the impact of drought on agriculture. Additional research in this area aims at extending IV-NDID to other network structures, such as peer effects, supply chains, and migration flows.
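
The two-stage logic can be illustrated with a stylized Python sketch: a hypothetical data-generating process in which the trade link responds to the treatment, a first stage that predicts network-transmitted exposure from an excluded instrument, and a first-differenced second stage. The single-destination network, instrument, and coefficients are illustrative assumptions, not the authors' specification.

# Stylized two-stage estimation with an endogenous trade network (IV-NDID-style logic).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000                                             # exporting regions
drought = rng.integers(0, 2, n).astype(float)        # local drought shock (period 2 only)
drought_dest = rng.integers(0, 2, n).astype(float)   # drought in the main destination market
z = rng.normal(size=n)                               # excluded instrument for the trade link (assumed)
u = rng.normal(size=n)                               # unobservable that makes the link endogenous

# Endogenous trade link: responds to the instrument, the treatment, and the unobservable.
link = 1.0 + 0.5 * z - 0.3 * drought + 0.4 * u + rng.normal(0, 0.2, n)

# First-differenced outcome (period 2 minus period 1), which removes unit fixed effects:
# production falls with local drought and rises with drought transmitted via the network.
dy = -1.5 * drought + 2.0 * link * drought_dest + 1.0 * u + rng.normal(0, 1, n)

# Endogenous regressor: network-transmitted treatment exposure.
exposure = link * drought_dest

# Stage 1: project the endogenous exposure on the excluded instrument (interacted with
# destination drought) and the exogenous treatment indicators.
X1 = sm.add_constant(np.column_stack([z * drought_dest, drought, drought_dest]))
exposure_hat = sm.OLS(exposure, X1).fit().fittedvalues

# Stage 2: DID-style first-difference regression using the fitted exposure.
X2 = sm.add_constant(np.column_stack([drought, exposure_hat]))
print(sm.OLS(dy, X2).fit().params)   # roughly recovers the -1.5 direct and +2.0 network effects

# A specification that drops the network term conflates direct and transmitted effects.
print(sm.OLS(dy, sm.add_constant(drought)).fit().params)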

Valuing Policy Characteristics and New Products using a Simple Linear Program

Spencer Banzhaf, North Carolina State University

Abstract

The Random Utility Model (RUM) is a workhorse model for valuing new products or changes in public goods. But RUMs have been faulted on two grounds: first, for including idiosyncratic errors that imply unreasonably high values for new alternatives and unrealistic substitution patterns; second, for imposing strong restrictions on the functional form of utility. This paper shows how, starting with a RUM framework, one can nonparametrically set-identify the answers to policy questions using only the Generalized Axiom of Revealed Preference (GARP). When GARP is satisfied, the approach set-identifies a pure characteristics model. When GARP is violated, the approach recasts the RUM errors as departures from GARP, to be minimized using a minimum-distance criterion. This perspective provides another avenue for nonparametric identification of discrete choice models. The paper illustrates the approach by estimating bounds on the values of ecological improvements in the Southern Appalachian Mountains using survey data.
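
The linear-programming machinery referred to in the title can be sketched in a few lines of Python: a feasibility check of the Afriat/GARP inequalities, with a note on how value bounds would be obtained from the same constraint set. The price and quantity data below are hypothetical, and the formulation is a generic Afriat system rather than the paper's exact program.

# GARP/Afriat feasibility check as a linear program. Variables are utility levels U_t and
# marginal utilities of income lam_t satisfying U_s <= U_t + lam_t * p_t . (x_s - x_t).
import numpy as np
from scipy.optimize import linprog

prices = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])   # hypothetical observed prices
bundles = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])  # hypothetical chosen bundles
T = len(prices)
eps = 1e-6

# Decision vector: [U_1..U_T, lam_1..lam_T]. One inequality per ordered pair (s, t):
#   U_s - U_t - lam_t * p_t . (x_s - x_t) <= 0
A_ub, b_ub = [], []
for s in range(T):
    for t in range(T):
        if s == t:
            continue
        row = np.zeros(2 * T)
        row[s] = 1.0
        row[t] = -1.0
        row[T + t] = -prices[t] @ (bundles[s] - bundles[t])
        A_ub.append(row)
        b_ub.append(0.0)

# Utilities unrestricted; multipliers strictly positive (lam_t >= eps).
bounds = [(None, None)] * T + [(eps, None)] * T
res = linprog(c=np.zeros(2 * T), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=bounds, method="highs")
print("GARP consistent (Afriat system feasible):", res.success)

To bound the value of a quality change or a new alternative, the zero objective would be replaced with the relevant linear expression and minimized and maximized subject to the same constraints; when GARP fails, slack variables measuring departures from GARP can be added and minimized under a minimum-distance criterion.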

Discussant(s)
Patrick Baylis, University of British Columbia
Brigitte Roth Tran, Federal Reserve Bank of San Francisco
Pierre Mérel, University of California-Davis
Catherine Kling, Cornell University
JEL Classifications
  • Q5 - Environmental Economics
  • C1 - Econometric and Statistical Methods and Methodology: General