Assessing Bias in Value-Added Models

Paper Session

Sunday, Jan. 8, 2017 3:15 PM – 5:15 PM

Hyatt Regency Chicago, Field
Hosted By: Econometric Society
  • Chair: Philip Gleason, Mathematica Policy Research

Revisiting the Impacts of Teachers

Jesse Rothstein, University of California-Berkeley

Abstract

Chetty, Friedman, and Rockoff (2014a,b) study value-added (VA) measures of teacher effectiveness. CFR (2014a) exploits teacher switching as a quasi-experiment, concluding that student sorting creates negligible bias in VA scores. CFR (2014b) finds VA scores are useful proxies for teachers' effects on students' long-run outcomes. I successfully reproduce each result in North Carolina data. But I find that the quasi-experiment is invalid, as teacher switching is correlated with changes in student preparedness. Adjusting for this, I find moderate bias in VA scores, perhaps 10-35% as large, in variance terms, as teachers' causal effects. The long-run results are sensitive to controls and cannot support strong conclusions.
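
One way to read the variance comparison (a sketch in illustrative notation, not the paper's): if each estimated VA score decomposes into a causal component and a sorting-bias component,

\[
\widehat{VA}_j \;=\; \mu_j + b_j ,
\]

then the stated range bounds the ratio of their variances,

\[
0.10 \;\lesssim\; \frac{\operatorname{Var}(b_j)}{\operatorname{Var}(\mu_j)} \;\lesssim\; 0.35 ,
\]

where \(\mu_j\) is teacher \(j\)'s causal effect and \(b_j\) is the bias from student sorting.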

Validating Teacher Effect Estimates Using Changes in Teacher Assignments in Los Angeles

Thomas Kane, Harvard University
Douglas Staiger, Dartmouth College
Andrew Bacher-Hicks, Harvard University

Abstract

We evaluate the degree of bias in teacher value-added estimates from Los Angeles using a “teacher switching” quasi-experiment proposed by Chetty, Friedman, and Rockoff (2014a). We have three main findings. First, we confirm that value-added is an unbiased forecast of teacher impacts on student achievement, and this result is robust to a range of specification checks. Second, we find that value-added estimates from one school provide unbiased forecasts of a teacher’s impact on student achievement in a different school. Finally, we document systematic differences in the effectiveness of teachers by student race, ethnicity, and prior achievement that widen achievement gaps rather than close them.
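
For readers unfamiliar with the design, the CFR forecast test can be summarized as follows (the notation is illustrative, not the paper's): within each school-grade cell, regress cohort-to-cohort changes in mean test scores on the corresponding changes in mean predicted teacher VA,

\[
\Delta \bar{y}_{sgt} \;=\; \alpha + \beta \,\Delta \overline{\widehat{VA}}_{sgt} + \varepsilon_{sgt} ,
\]

where \(s\), \(g\), and \(t\) index school, grade, and year. Unbiased forecasts correspond to \(\beta = 1\), and estimates of \(1 - \beta\) measure forecast bias.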

Does It Matter How Teacher Effectiveness Is Measured? Assessing Bias in Alternative Value-Added Models Using Data From Multiple Districts

Eric J. Isenberg, Mathematica Policy Research
Elias Walsh, Mathematica Policy Research
Philip Gleason, Mathematica Policy Research
Jeffrey Max, Mathematica Policy Research

Abstract

We measure the amount of bias in value-added estimates using the approach developed by Chetty, Friedman, and Rockoff (2014), which is based on teacher transitions. This approach compares changes over time in average test scores within schools to changes in the value-added estimates of the teachers in those schools. We extend previous applications of the method by applying it to data from multiple districts and to several value-added models. Our data cover a geographically diverse set of 20 districts of varying size, with teacher value-added estimates for the five school years from 2008-2009 through 2012-2013. We compare bias in value-added models that include or exclude common features, such as accounting for measures of students' classroom peers or addressing measurement error in pre-test scores. We then examine how using a value-added model identified as more biased affects teachers of certain students, such as disadvantaged students. Finally, we describe pitfalls to avoid when applying the method to small or moderate-sized districts or to short panels, and we contribute to a discussion of potential threats to the validity of the bias estimates.
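
A minimal sketch of this transition-based comparison, assuming a tidy student-level panel with hypothetical column names (student_score, teacher_va, school, grade, year) and ignoring the weighting, drift adjustments, and leave-out estimation used in the actual papers:

import pandas as pd
import statsmodels.api as sm

def forecast_bias(df: pd.DataFrame) -> float:
    """Return 1 - beta from the teacher-switching regression; 0 = no bias.

    Collapses the student panel to school-grade-year cells, first-differences
    mean scores and mean teacher VA across adjacent cohorts, and regresses one
    change on the other. All column names are illustrative assumptions.
    """
    cells = (df.groupby(["school", "grade", "year"])
               .agg(mean_score=("student_score", "mean"),
                    mean_va=("teacher_va", "mean"))
               .reset_index()
               .sort_values(["school", "grade", "year"]))
    g = cells.groupby(["school", "grade"])
    cells["d_score"] = g["mean_score"].diff()  # cohort-to-cohort score change
    cells["d_va"] = g["mean_va"].diff()        # change in mean teacher VA
    cells = cells.dropna(subset=["d_score", "d_va"])
    fit = sm.OLS(cells["d_score"], sm.add_constant(cells["d_va"])).fit(
        cov_type="cluster",
        cov_kwds={"groups": cells["school"].astype("category").cat.codes})
    return 1.0 - fit.params["d_va"]  # beta is the forecast coefficient

With few teacher transitions per year, as in a small district, this regression is underpowered; pooling across years or districts is one natural response, which connects to the pitfalls the abstract flags.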
Discussant(s)
Kirabo Jackson, Northwestern University
Richard Mansfield, Cornell University
JEL Classifications
  • I2 - Education and Research Institutions