March 14, 2018
Cite your sources
What can economists’ citations tell us about the field?
Citation counts are a more informative measure of individual achievement than institutional or journal reputation.
Author Daniel Hamermesh believes that not everyone will be happy about his latest research, but he’s not bothered by that: “It’s our job to make people think about things,” he says. With this article, Hamermesh hopes that economists will start to recognize citation counts as one of the best measures of research quality, despite the traditional importance of journal and institutional reputation in assessing economists’ contributions to the field.
The paper, which appears in the March issue of the Journal of Economic Literature, examines the role of citations within the economics profession. Because citations capture how much literature builds on a piece of scholarship, they act as a market-based measure of research importance.
As Hamermesh finds, citations are the most reliable measure of economists’ contributions to the field; in particular, they are more informative than reputation. This includes the reputations of both the institutions where economists conduct research and the journals in which they publish. Putting greater emphasis on citations may be difficult for those who, like many economists, have traditionally seen institutional and journal reputation as a strong indicator of individual achievement. Nevertheless, developing accurate and objective measurements is a worthy goal, and citations seem to be the best option.
Within top American universities, each economics department shows a broad range of lifetime citations among its professors. This is the trouble with relying on institutional reputation: working in a renowned department does not necessarily mean producing highly cited work. In fact, most scholars in the top quartile of their department (by lifetime citations) at schools ranked 11 through 20 are more heavily cited than the median scholar in departments ranked one through ten. Similarly, journals publish articles with varying levels of impact. Top journals often publish papers that receive few citations, while other journals publish papers that become influential. In either case, reputation alone is not enough to judge the quality of a researcher or their papers.
[Without citations], we’re just at the mercy of who can yell the loudest.
Hamermesh also finds that short-term success usually predicts long-term popularity. Papers with few citations in their early years almost never become famous, and papers that do well initially continue to accumulate citations over time. Among papers published in 1974-75, none of the articles in the bottom twenty percent of citations within their first ten years has since reached the top twenty percent. Conversely, only a handful of papers in the top fifth of initial citations have faded into obscurity; most continue to be well cited decades later.
Of course, these conclusions may depend on how citation statistics are constructed. Lifetime citation counts seem objective, but they favor older economists, who have had more time to publish than early-career scholars. Citation data also come from several sources, which do not always agree. In practice, however, neither the data source nor the exact statistic matters much. For example, Hamermesh uses several statistics to rank entire economics departments by their citation output. Whether a department is ranked by the lifetime citations of its median-cited scholar or of the scholar at the 90th percentile, the list remains largely the same. Even adjusting citation counts for age does not meaningfully change the rankings.
Citations are a useful measure of impact, but they can also have an impact of their own. This is particularly true for professors’ salaries, which makes sense: institutions pay researchers to create knowledge, the value of which is reflected in their citations. Additional citations do increase salary, and, somewhat surprisingly, the magnitude of these increases is independent of journal quality. Citations also help determine who receives honors and awards. Yet “best article” awards are rarely given to the most-cited article in a volume (although they do go to highly cited works).
Hamermesh believes that economists currently undervalue the impact of citation counts, especially relative to how much they value other (and often more subjective) measures of accomplishment, like reputation. Citations should not be the only criterion when valuing a researcher’s work, but given the heterogeneity within institutions and journals, it is important to look at individuals and individual articles more carefully. For example, economists could make an intentional effort to emphasize citations in salary-setting, promotions, and appointments.
Ultimately, putting more stock in citations could make the profession fairer, Hamermesh remarked: “Otherwise, we’re just at the mercy of who can yell the loudest.”
"Citations in Economics: Measurement, Uses, and Impacts" appears in the March issue of the Journal of Economic Literature.