+11 votes
asked ago in Current Economic Issues by (6.9k points)
reshown ago by
commented ago by (900 points)
"reshown 6 hours ago by EconSpark Admin" What does that mean?
commented ago by (780 points)
The question was accidentally hidden by a user, and has now been reshown at their request.
commented ago by (6.9k points)
Yes, that was me. I was investigating the edit buttons and inadvertently made the whole thread vanish.

3 Answers

+6 votes
answered ago by (3.3k points)
edited ago by
This is a challenging case, and one that far smarter people than I have weighed in on, so my opinion or impression doesn't really matter one way or the other. This is also a controversial question, but the only way this EconSpark thing will work is if we debate some things that aren't entirely obvious (unlike, say, "do not call the chair of the search committee to signal interest," which is obviously right).

So this isn't about coding errors or anything that simple; as I understand it, this is about the nuances of sample construction and controls.

Question 1: Should legacy admits and student athletes be included in the main model if they are treated differently? If this were a paper at a conference, I can imagine a debate in both directions. They are a third of admits, so that's a ton of data to throw away. At the same time, test scores and grades might map into admission differently for them, so I can envision either a model that fully interacts on those variables or running the model separately for legacies/athletes and for the general population. If I had written a paper picking either approach, referee 2 would suggest I do the opposite.
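To make the two options concrete, here is a minimal sketch in Python with statsmodels; the data file and the variable names (admit, sat, gpa, special) are hypothetical placeholders, not the case data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical applicant-level data; 'special' flags legacy/athlete applicants.
df = pd.read_csv("applicants.csv")

# Option A: one model, fully interacted on the special-category flag, so
# scores and grades are allowed to map into admission differently.
interacted = smf.logit("admit ~ special * (sat + gpa)", data=df).fit()

# Option B: estimate the model on the general pool alone.
general_only = smf.logit("admit ~ sat + gpa", data=df[df.special == 0]).fit()

print(interacted.summary())
print(general_only.summary())
```

Option A keeps that extra third of the data while still letting the coefficients differ across groups; Option B is cleaner to interpret but throws the data away.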

Question 2: Should we control for multi-dimensionality as measured via interviews? On one hand, this is reasonable because it's part of the admissions process, and perhaps a key one when so many students have perfect test scores and grades. However, we might also worry this is exactly where a thumb gets put on the scale, intentionally or unintentionally. I think interviews can be important and insightful. In our hiring process over the years, some candidates who look great on paper can't answer straightforward questions about their research, future agenda, or teaching. So there clearly is information you can gain from interviews. The challenge is whether my assessment is free of bias. While I try to be objective, my guess is I have some biases I'm largely unaware of (and some that, embarrassingly, I have become aware of and have tried to correct).

Is there likely some level of bias in admissions to colleges? I would guess yes, if only because we have found racial, gender, and other biases among judges, police officers, homeowners, car salespeople, shoppers online, managers, doctors, patients, students taking online classes, and more recently editors and referees at academic journals. Why should we expect admissions officers to be free of bias individually, or to be employed in a random enough way that their biases happen to balance and perfectly offset each other? It's a bit easier for me to believe we're in a world where all humans have some level of implicit bias than to assume we're in a world where some people are perfect and others deeply flawed. A bigger question is how we alter or change systems to limit or mitigate the impact of those implicit biases. I'm sure many if not most universities are sensitive to these issues and design their systems to try to keep admissions fair, but as long as humans are involved we're going to make mistakes, won't we?

The true amount of bias in admissions? My coarse metric of the true bias is based on partial identification and has some bounds: probably somewhere between the experts hired to say it's big and those hired to defend that it's negligible. Like most bounding methods it's a wide interval, but hopefully we can sharpen those bounds. I'd need to spend a little more time reading the briefs or seeing the presentations in a seminar or conference (any chance of a public presentation in a special session at the AEAs?) to update my priors on how to narrow my bounds. The best way to measure bias would probably be some form of audit study, as those proved to be insightful ways to measure discrimination in the housing market decades ago and have recently become quite popular for measuring employment discrimination (among other things).
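The bounding logic here fits in a few lines; this is just a sketch of the idea, and the two numbers are invented placeholders, not figures from either expert report.

```python
# Hypothetical point estimates of the admissions penalty from the two sides.
estimate_negligible = 0.00  # placeholder: the "it's negligible" expert
estimate_large = 0.05       # placeholder: the "it's big" expert

# Absent assumptions strong enough to pick one specification over the other,
# the identified set for the true bias is the whole interval between them.
lower, upper = sorted([estimate_negligible, estimate_large])
print(f"Identified set: [{lower:.2f}, {upper:.2f}]")

# Sharpening the bounds means adding credible restrictions (e.g., ruling out
# specifications both sides agree are wrong), which shrinks this interval.
```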


Equally important in this debate are the externalities of who your classmates are. The article also cited the testimony of Dr. Simmons, who discussed the importance of diversity in terms of exposing students to unfamiliar perspectives. Those perspectives can create debate and conflict and, with them, knowledge. I think measuring and understanding those externalities deserves just as much attention.

Now your turn.  What do you think?
+9 votes
answered ago by (1.3k points)
It's been a few months, but I've read the original reports from Card and Arcidiacono as well as the rebuttal by Card, the amicus brief in support of Card (by Susan Dynarski, Harry Holzer, Hilary Hoynes, Guido Imbens, Alan Krueger, Helen Ladd, David Lee, Trevon Logan, Alexandre Mas, Michael McPherson, Jesse Rothstein, Cecilia Rouse, Robert Solow, Lowell Taylor, Sarah Turner, and Douglas Webber) and the amicus brief in support of Arcidiacono (by Michael Keane, Hanming Fang, Yingyao Hu, Glenn Loury, and Matt Shum). For simplicity, I'll refer to these groups as Team Card and Team Peter.

To start, I think many on Team Card are biased because they view this case as a direct threat to affirmative action. I don't think that's the case.  There is no affirmative action argument for why whites should have an advantage over Asians in college admissions, and that's what I think this case is primarily about (but I'm not a lawyer). I have my own biases, which I disclose at the end.

The empirical arguments come down to a few issues. Some of the issues that are argued over (seemingly endlessly) end up not making much of a difference. The primary issue in this category is whether to pool admission years. Team Peter says pool; Team Card says don't, because every year has a completely different applicant pool and different people working on the admissions committee (although a member of Team Card has a paper where college admission is an outcome and the years are pooled).

The issue that is discussed in the article is whether people in special admission categories should be included in the analysis. I tend to agree with Peter here, because those applicants seem to be judged by a completely different standard (certainly more so than the standards across years). What I would like to see is a version of the model where they are included along with interactions. This and the pooling issue are ultimately empirical questions that neither side definitively answered (in my opinion). In both cases, one specification is a nested version of the other, and you could run a simple test, as sketched below.
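For example, the pooling question could be settled with a standard likelihood ratio test of the pooled (restricted) model against a year-interacted (unrestricted) one; the same test applies to the special-category interactions. This sketch uses statsmodels with hypothetical variable names (admit, year, academic, personal), not the actual case data.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")  # hypothetical applicant-level data

# Restricted model: one set of coefficients pooled across admission years.
pooled = smf.logit("admit ~ academic + personal", data=df).fit()

# Unrestricted model: year-specific intercepts and slopes.
by_year = smf.logit("admit ~ C(year) * (academic + personal)", data=df).fit()

# Likelihood ratio statistic, chi-squared under the null that pooling is fine.
lr = 2 * (by_year.llf - pooled.llf)
dof = by_year.df_model - pooled.df_model
print(f"LR = {lr:.1f}, df = {dof:.0f}, p = {stats.chi2.sf(lr, dof):.4f}")
```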

The biggest issue is whether or not to include the "personal rating". This variable is a summary measure of things that aren't captured by the other measures. There are four measures used in admissions: academic, extracurricular, personal, and athletic. Asians generally perform better than whites on academic and extracurricular, but worse on the personal rating. Excluding the personal rating makes it look like Harvard is discriminating against Asians (holding them to a higher standard on the other measures). Team Peter argues that race is a part of this measure and it should be excluded. Team Card argues that it includes important information not captured by other measures. I kind of agree with both. Race sure seems to be an important part of this measure (regardless of what Harvard says; any racial consideration is supposed to happen later in the process). But the measure also includes relevant information that can't be captured elsewhere. My biggest problem is that the measure seems to be highly subjective, and it's not really clear how it's constructed. These sorts of fuzzy, subjective measures could easily be used to implement taste-based discrimination against any group, but ultimately the case should be decided on its own merits and not on concerns about potential future consequences of the ruling.
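The sensitivity at stake can be illustrated directly: compare the coefficient on an Asian indicator with and without the personal rating as a control. Again a hypothetical sketch with made-up variable names, not the case data.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applicants.csv")  # hypothetical

base = "admit ~ asian + academic + extracurricular + athletic"
without_pr = smf.logit(base, data=df).fit()
with_pr = smf.logit(base + " + personal", data=df).fit()

# If race itself influences the personal rating, the second specification
# conditions on a "bad control" and understates any discrimination.
print("Asian coef, personal rating excluded:", without_pr.params["asian"])
print("Asian coef, personal rating included:", with_pr.params["asian"])
```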

I generally favor Team Peter's approach, but I don't think either analysis is totally compelling. Ultimately, I think it will come down to some of the other evidence, and on that basis I'm more inclined to support the plaintiffs.

Some other points:

At one point in Card's analysis he misinterprets a McFadden pseudo R-squared to imply that Arcidiacono's model had a poor fit. Given that he almost certainly knows how to interpret it correctly, it really made me question his integrity. I get that he's supposed to spin his opponent's results and that's what he's getting paid to do, but this seems a bit much. Maybe that's just accepted practice for an expert witness.
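For readers who haven't seen it: the McFadden pseudo R-squared compares the fitted model's log-likelihood to that of an intercept-only model, and McFadden himself suggested that values of roughly 0.2 to 0.4 already indicate an excellent fit, so a number that looks "low" by linear R-squared standards does not imply a poor model. A quick illustration on simulated data (nothing here is from the case materials):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
y = (x @ np.array([0.8, -0.5, 0.3]) + rng.logistic(size=5000) > 0).astype(int)

model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
null = sm.Logit(y, np.ones((5000, 1))).fit(disp=0)

# McFadden: rho^2 = 1 - llf(model) / llf(intercept-only).
rho2 = 1 - model.llf / null.llf
print(f"manual: {rho2:.3f}, statsmodels built-in: {model.prsquared:.3f}")
```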

One criticism I have of Team Card is that they generally take Harvard's word when it comes to specifying how the admissions process works and what characteristics are important in admissions. They do this to argue against an interaction term that Arcidiacono includes, and in his testimony Card mentions that Arcidiacono doesn't understand key aspects of the admissions process. I don't think it is appropriate to appeal to Harvard's stated preferences when arguing over specifications, since the entire issue is whether Harvard follows its stated procedure.

I have some personal biases in this case:
1. Peter was on my committee and wrote a letter of recommendation for me when I was on the market.
2. At least one person on Team Card (and probably a lot more than one) hates me, which makes me a lot less inclined to support them.
3. I'm half Asian, although I never applied to any of these elite schools. Coming out of high school, I was a lot closer to zero-dimensional excellence than being excellent in multiple dimensions.
0 votes
answered ago by (1k points)
Seems pretty obvious that legacies should be excluded. Since discrimination only shows up when looking at marginal admits by race (not at admit rates by race), anything non-marginal just confounds the analysis. Peter is more credible.
...