India's population growth, legalizing marijuana, and dominant digital platforms
[Image: Street traffic in Varanasi, India.]
India’s Ministry of Finance has published Economic Survey 2018–19 in two volumes. The wide-ranging report is impossible to summarize, but here are a few tastes. On demography:
India is set to witness a sharp slowdown in population growth in the next two decades. . . . It will surprise many readers to learn that population in the 0–19 age bracket has already peaked due to sharp declines in total fertility rates (TFR) across the country. . . . Contrary to popular perception, many states need to pay greater attention to consolidating/merging schools to make them viable rather than building new ones. At the other end of the age scale, policy makers need to prepare for ageing.
On resolving contractual disputes:
Arguably the single biggest constraint to ease of doing business in India is now the ability to enforce contracts and resolve disputes. This is not surprising given the 3.5 crore [that is, 35 million] cases pending in the judicial system. . . . Contrary to conventional belief, however, the problem is not insurmountable. A case clearance rate of 100 per cent (i.e. zero accumulation) can be achieved with the addition of merely 2,279 judges in the lower courts and 93 in High Courts even without efficiency gains. . . . Given the potential economic and social multipliers of a well-functioning legal system, this may well be the best investment India can make. . . . As a concerted effort made in the enactment and implementation of the [Insolvency and Bankruptcy Code], India improved its ‘Resolving Insolvency’ ranking from 134 in 2014 to 108 in 2019. . . . India won the Global Restructuring Review (GRR) award for the most improved jurisdiction in 2018. Financial Sector Assessment Program of IMF-World Bank in January 2018 observed: ‘India is moving towards a new state of the art bankruptcy regime.’
On India’s minimum wage:
[T]he present minimum wage system in India is extremely complex with 1,915 minimum wages defined for various scheduled job categories for unskilled workers across various states. Despite its complex structure and proliferation of scheduled employments over time, the Minimum Wages Act, 1948 does not cover all wage workers. One in every three wage workers in India has fallen through the crack and is not protected by the minimum wage law.
India Ministry of Finance (2019)
The Congressional Budget Office evaluates “The Effects on Employment and Family Income of Increasing the Federal Minimum Wage”:
As of 2019, 29 states and the District of Columbia have a minimum wage higher than the federal minimum. . . . The minimum wage is indexed to inflation in 17 of those states, and future increases have been mandated in 6 more. . . . About 60 percent of all workers currently live in states where the applicable minimum wage is more than $7.25 per hour. And in 2025, about 30 percent of workers will live in states with a minimum wage of $15 or higher.
The median CBO estimate for a phased-in rise in the minimum wage to $15 per hour includes:
[A]bout 1.3 million workers who would otherwise be employed would be jobless in an average week in 2025. . . . Wages would rise, however, for 17 million directly affected workers who remained employed and for many of the 10 million potentially affected workers whose wages would otherwise fall slightly above $15 per hour. . . . Almost 50 percent of the newly jobless workers in a given week—600,000 of 1.3 million—would be teenagers (some of whom would live in families with income well above the poverty threshold). Employment would also fall disproportionately among part-time workers and adults without a high school diploma. . . . That net effect is due to the combination of factors described above:
• Real earnings for workers while they remained employed would increase by $64 billion,
• Real earnings for workers while they were jobless would decrease by
• Real income for business owners would decrease by $14 billion, and
• Real income for consumers would decrease by $39 billion.
Congressional Budget Office (2019)
Mark A. R. Kleiman lays out some trade-offs in “The Public-Health Case for Legalizing Marijuana”:
John Kenneth Galbraith once said that politics consists in choosing between the disastrous and the unpalatable. The case of cannabis, an illicit market with sales of almost $50 billion per year, and half a million annual arrests, is fairly disastrous and unlikely to get better. . . . The choice we now face is not whether to make cannabis available, but whether its production and use should be legal and overt or illegal and at least somewhat covert.
Cannabis, even as an illegal drug, is a remarkably cost-effective intoxicant, far cheaper than alcohol. For example, in New York City, where cannabis is still illegal, a gram of fairly high-potency material (say, 15% THC by weight) goes for about $10. A user can therefore obtain 150 milligrams of THC for $10, paying about 7 cents per milligram. Getting stoned generally requires around 10 milligrams of THC to reach the user’s bloodstream, but the smoking process isn’t very efficient; about half the THC in the plant gets burned up in the smoking process or is exhaled before it has been absorbed by the lungs. So a user would need about 20 milligrams of THC in plant material to get stoned, or a little less than $1.50 worth. For a user without an established tolerance, intoxication typically lasts about three hours. That works out to about 50 cents per stoned hour.
So it costs a typical man drinking beer about $4 to get drunk—typically for a couple of hours—and staying drunk costs an additional $1 per hour. . . . Over the past quarter-century, the population of “current” (past-month) users has more than doubled (to 22 million) and the fraction of those users who report daily or near-daily use has more than tripled (to about 35%). . . . Between a third and a half of them report the symptoms of Cannabis Use Disorder: They’re using more, or more frequently, than they intend to; they’ve tried to cut back or quit and failed; cannabis use is interfering with their other interests and responsibilities; and it’s causing conflict with people they care about.
National Affairs (2019)
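Kleiman’s back-of-the-envelope cost comparison can be reproduced step by step. A minimal sketch, using only the figures quoted in the excerpt above:

```python
# Reproducing Kleiman's arithmetic; every input comes from the excerpt.
price_per_gram = 10.00               # dollars, NYC illegal-market price
potency = 0.15                       # 15% THC by weight
thc_mg_per_gram = potency * 1000     # 150 mg of THC per gram

cost_per_mg = price_per_gram / thc_mg_per_gram   # ~$0.067, "about 7 cents"

absorbed_mg_needed = 10              # mg of THC reaching the bloodstream
smoking_efficiency = 0.5             # roughly half is burned up or exhaled
plant_mg_needed = absorbed_mg_needed / smoking_efficiency   # 20 mg in plant material

cost_to_get_stoned = plant_mg_needed * cost_per_mg   # ~$1.33, "a little less than $1.50"
hours_intoxicated = 3                # typical duration without tolerance
cost_per_stoned_hour = cost_to_get_stoned / hours_intoxicated   # ~$0.44, "about 50 cents"

print(f"${cost_per_mg:.3f}/mg, ${cost_to_get_stoned:.2f} per session, "
      f"${cost_per_stoned_hour:.2f} per stoned hour")
```

The numbers land slightly under Kleiman’s rounded figures (about 44 cents per hour rather than 50), which is consistent with his hedged phrasing.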
Danit Kanal and Joseph Ted Kornegay describe “Accounting for Household Production in the National Accounts: An Update, 1965–2017”:
To compute household production, we first aggregated household production hours across seven categories: housework, cooking, odd jobs, gardening, shopping, child care, and domestic travel. The value of nonmarket services is the product of the wage rate of general-purpose domestic workers and the number of hours worked. . . . Household production has declined in significance over time as more women engage in market work. This sector accounted for 37 percent of the satellite account’s output in 1965, but that declined to 23 percent in 2017.
Survey of Current Business (2019)
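The valuation rule Kanal and Kornegay describe (hours across the seven categories, valued at the wage of general-purpose domestic workers) can be sketched in a few lines. The category names follow the excerpt; the hours and wage figures below are invented purely for illustration:

```python
# Sketch of the satellite-account valuation rule described above:
# nonmarket services = hours of household production x domestic-worker wage.
# Hours and wage are hypothetical illustrative numbers, not BEA data.
weekly_hours = {
    "housework": 9.0, "cooking": 5.5, "odd jobs": 2.0, "gardening": 1.5,
    "shopping": 3.0, "child care": 4.0, "domestic travel": 2.5,
}
domestic_worker_wage = 12.50   # hypothetical dollars per hour

total_hours = sum(weekly_hours.values())
value_of_nonmarket_services = total_hours * domestic_worker_wage
print(f"{total_hours} hours -> ${value_of_nonmarket_services:.2f} per week")
```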
Competition in the Digital Economy
An expert panel formed by the UK government, made up of Jason Furman, Diane Coyle, Amelia Fletcher, Philip Marsden, and Derek McAuley, has written Unlocking Digital Competition: Report of the Digital Competition Expert Panel:
There is nothing inherently wrong about being a large company or a monopoly and, in fact, in many cases this may reflect efficiencies and benefits for consumers or businesses. But dominant companies have a particular responsibility not to abuse their position by unfairly protecting, extending or exploiting it. . . . Acquisitions have included buying businesses that could have become competitors to the acquiring company (for example Facebook’s acquisition of Instagram), businesses that have given a platform a strong position in a related market (for example Google’s acquisition of DoubleClick, the advertising technology business), and data-driven businesses in related markets which may cement the acquirer’s strong position in both markets (Google/YouTube, Facebook/WhatsApp). Over the last 10 years the 5 largest firms have made over 400 acquisitions globally. None has been blocked and very few have had conditions attached to approval, in the UK or elsewhere, or even been scrutinised by competition authorities.
UK government (2019)
Fiona Scott Morton, Pascal Bouvier, Ariel Ezrachi, Bruno Jullien, Roberta Katz, Gene Kimmelman, A. Douglas Melamed, and Jamie Morgenstern have written Committee for the Study of Digital Platforms: Market Structure and Antitrust Subcommittee Report, published by the Stigler Center at the University of Chicago:
By looking at the sub-industries associated with each firm—social platforms (Facebook), internet software (Google), and internet retail (Amazon)—a different trend emerges. Since 2009, change in startup investing in these sub-industries has fared poorly compared to the rest of software for Google and Facebook, the rest of retail for Amazon, and the rest of all VC for each of Google, Facebook, and Amazon. This suggests the existence of so-called ‘kill-zones,’ that is, areas where venture capitalists are reluctant to enter due to small prospects of future profits. In a study of the mobile app market, Wen Wen and Feng Zhu come to a similar conclusion: Big tech platforms do dampen innovation at the margin.
Stigler Center (2019)
A group of outside advisers for the European Commission, made up of Jacques Crémer, Yves-Alexandre de Montjoye, and Heike Schweitzer, has written Competition Policy for the Digital Era:
Data is acquired through three main channels. First, some data is volunteered, i.e. intentionally contributed by the user of a product. A name, email, image/video, calendar information, review, or a post on social media would qualify as volunteered data. . . . Second, . . . many activities leave a digital trace, and ‘observed data’ refers to more behavioural data obtained automatically from a user’s or a machine’s activity. The movement of individuals is traced by their mobile phone; telematic data records the roads taken by a vehicle and the behaviour of its driver; every click on a web page can be logged by the website and third party software monitors the way in which its visitors are behaving. . . . Finally, some data is inferred, that is obtained by transforming in a non-trivial manner volunteered and/or observed data while still related to a specific individual or machine. This will include . . . categories resulting from clustering algorithms or predictions about a person’s propensity to buy a product, or credit ratings. The distinction between volunteered, observed and inferred data is not always clear. . . . [W]e will also consider how data is used. We will define four categories of uses: non-anonymous use of individual-level data, anonymous use of individual level data, aggregated data, and contextual data.
European Commission (2019)
These three reports dovetail and overlap in a number of ways, and also complement the two-paper “Symposium on Issues in Antitrust” in the Summer 2019 issue of this journal.
The Harvard Data Science Review has published its first issue. Among the essays, Alan M. Garber offers a broad-based essay on “Data Science: What the Educated Citizen Needs to Know.” Mark Glickman, Jason Brown, and Ryan Song use a machine learning approach to figure out whether Lennon or McCartney is more likely to have authored certain songs by the Beatles that are officially attributed to both, in “(A) Data in the Life: Authorship Attribution in Lennon-McCartney Songs.” Michael I. Jordan contributes “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” which is followed by eleven comments and a rejoinder from Jordan entitled “Dr. AI or: How I Learned to Stop Worrying and Love Economics.”
Am I arguing that we should simply bring in microeconomics in place of computer science? And praise markets as the way forward for AI? No, I am instead arguing that we should bring microeconomics in as a first-class citizen into the blend of computer science and statistics that is currently being called ‘AI.’ Indeed, classical recommendation systems can and do cause serious problems if they are rolled out in real-world domains where there is scarcity. Consider building an app that recommends routes to the airport. If few people in a city are using the app, then it is benign, and perhaps useful. When many people start to use the app, however, it will likely recommend the same route to large numbers of people and create congestion.
The best way to mitigate such congestion is not to simply assign people to routes willy-nilly, but to take into account human preferences—on a given day some people may be in a hurry to get to the airport and others are not in such a hurry. An effective system would respect such preferences, letting those in a hurry opt to pay more for their faster route and allowing others to save for another day. But how can the app know the preferences of its users? It is here that major IT companies stumble, in my humble opinion. They assume that, as in the advertising domain, it is the computer’s job to figure out human users’ preferences, by gathering as much information as possible about their users, and by using AI. But this is absurd; in most real-world domains—where our preferences and decisions are fine-grained, contextual, and in-the-moment—there is no way that companies can collect enough data to know what we really want. Nor would we want them to collect such data—doing so would require getting uncomfortably close to prying into the private thoughts of individuals. A more appealing approach is to empower individuals by creating a two-way market where (say) street segments bid on drivers, and drivers can make in-the-moment decisions about how much of a hurry they are in, and how much they’re willing to spend (in some currency) for a faster route.
The Harvard Data Science Review (2019)
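Jordan’s two-way-market proposal can be caricatured in a few lines: road segments post congestion prices, and each driver weighs the toll against travel time at her own in-the-moment value of time. Everything below (routes, pricing rules, values of time) is invented for illustration and is not from the article:

```python
# Toy two-way market for airport routes, in the spirit of Jordan's example.
# All parameters are hypothetical assumptions, not anything he specifies.
routes = {"highway": {"base_minutes": 20, "load": 0},
          "surface": {"base_minutes": 35, "load": 0}}

def price(r):
    # assumed congestion toll: $0.50 per car already routed this way
    return 0.5 * routes[r]["load"]

def delay(r):
    # assumed congestion delay: 1 extra minute per 2 cars already routed
    return routes[r]["base_minutes"] + routes[r]["load"] / 2

def choose(value_of_time):
    # each driver minimizes toll plus time cost at her own $/minute rate
    best = min(routes, key=lambda r: price(r) + value_of_time * delay(r))
    routes[best]["load"] += 1
    return best

# Hurried drivers ($2/min) keep paying for the fast road; an unhurried
# driver ($0.10/min) eventually defects to the cheap, slower road as
# the congestion price on the highway rises.
assignments = [choose(v) for v in [2.0, 0.1, 2.0, 0.1, 2.0]]
print(assignments)
```

The point of the sketch is only that heterogeneous, self-reported urgency sorts drivers across routes, rather than the app pushing everyone onto the same road.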
The Peter G. Peterson Foundation has commissioned 31 research papers as part of its “US 2050” project, falling into the broad categories of population, early investments in children, employment and adult workers, caregiving, retirement, and politics. In the “caregiving” category, here’s Stipica Mudrazija in “Work-Related Opportunity Costs of Providing Unpaid Family Care”:
Accounting for future population aging and trends in physical disability and adjusting for compositional changes of the future population, the number of caregivers needed to keep the current prevalence of unpaid caregiving constant would have to almost double. . . . Therefore, future discussions of the role of unpaid family care should recognize that this is a finite and increasingly expensive resource.
In another paper, Gal Wettstein and Alice Zulkarnain ask, “Will Fewer Children Boost Demand for Formal Caregiving?”
The authors estimate that, among people over age 50, having one fewer child increases the probability of having spent a night in a nursing home in the last two years by 1.7 percentage points—a magnitude comparable to the effect of having poor self-reported health, or of being ten years older.
The Peter G. Peterson Foundation (2019)
The Russell Sage Foundation Journal of the Social Sciences has published a double issue with 13 papers illustrating the theme of Using Administrative Data for Science and Policy—which is also the title of the introductory essay by Andrew M. Penner and Kenneth A. Dodge:
Research using administrative data has much in common with history and archeology, insofar as it observes the tracks that individuals leave as they move through society and draws lessons from these glimpses into their lives. . . . Given their origin in a particular institutional context, administrative records are typically fragmented, and these data are often not linked to other data that would be useful for research and policy. Hospitals, for example, collect detailed information about patients’ health, schools regularly collect information about student development, and employers often keep records not only about the performance of employees, but also about applicants who were ultimately not offered positions. Although various combinations of these data can provide important insights, they are typically compartmentalized. Likewise, given their origin, administrative records often lack certain kinds of information that are less likely to be collected in these records. For example, information about attitudes, affinities, and motives is not often collected in administrative records. Combining administrative data with records from other sources—either by linking administrative records across sources or by making administrative records available to be linked to data collected via other means—is thus central to building administrative data infrastructure.
The Russell Sage Foundation Journal of the Social Sciences (2019)
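The record linkage Penner and Dodge describe can be sketched with two hypothetical administrative sources sharing an identifier; the identifiers, fields, and values below are invented:

```python
# Minimal sketch of linking fragmented administrative records across
# sources on a shared identifier. Sources and fields are hypothetical.
school_records = {
    "id-01": {"reading_score": 88},
    "id-02": {"reading_score": 74},
}
hospital_records = {
    "id-01": {"asthma_visits": 0},
    "id-03": {"asthma_visits": 2},
}

# Link on the shared key. As the essay notes, records are fragmented:
# only individuals appearing in both sources end up in the linked file.
linked = {
    k: {**school_records[k], **hospital_records[k]}
    for k in school_records.keys() & hospital_records.keys()
}
print(linked)   # only id-01 appears in both sources
```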
Rachel Glennerster was interviewed by Robert Wiblin and Nathan Labenz at the 80,000 Hours website in “A Year’s Worth of Education for Under a Dollar and Other ‘Best Buys’ in Development, from the UK Aid Agency’s Chief Economist.”
I think actually RCTs [randomized controlled trials] should not be seen as looking at testing this specific program, they should be seen as testing big questions that can then influence policy. For example, you might test a specific project on education. A lot of the work on education has suggested that the most effective thing we can do in education is to focus on the learning within the classroom. It’s not about more money, it’s not about more textbooks, it’s not about . . . And that’s what governments spend their money on. They spend it on teachers and textbooks, mainly teachers. But more teachers doesn’t actually improve learning. More textbooks doesn’t improve learning. But that’s what the Indian government is spending their money on.
If you look at the data, just descriptive data, again, the power of descriptive data . . . within an average Indian classroom in 9th grade, none of the kids are even close to the 9th grade curriculum. They’re testing at somewhere between 2nd grade and 6th grade. No wonder they’re not learning very much, ’cause the teacher, the only thing that a teacher has to do by law in India is complete the curriculum, even if the kids have no idea what they’re talking about. So yes, you have RCTs testing very specific interventions; all of the ones that worked were ones that got the teaching down from the 9th grade curriculum to a level that the kids could actually understand. Now the lesson from that, the big lesson for the Indian government if they were ever to agree to this, is change your curriculum. That’s the biggest thing that you could do. Reform the curriculum and make it more appropriate to what children are doing. So yes, you’re testing little things, but you’re coming out with big answers.
80,000 Hours (2018)
Douglas Clement at the Federal Reserve Bank of Minneapolis has published a “William ‘Sandy’ Darity Jr. Interview: ‘If You Think Something’s the Right Thing to Do, Then You Pursue It’”:
On inequalities of wealth:
I’m absolutely convinced that the primary factor determining household wealth is the transmission of resources across generations. The conventional view of how you accumulate wealth is through fastidious and deliberate acts of personal saving. I would argue that the capacity to engage in some significant amount of personal saving is really contingent on already having a significant endowment, an endowment that’s independent of what you generate through your own labor. . . . I think these effects go beyond inheritances and gifts. I think it includes the sheer economic security that young people can experience being in homes where there is this cushion of wealth. It provides a lack of stress and a greater sense of what your possibilities are in life. . . . And if your family’s wealthy enough, you come out of college or university without any educational debt. That can be a springboard to making it easier for you to accumulate your own level of wealth.
On stratification economics:
Stratification economics is an approach that emphasizes relative position rather than absolute position. What’s relevant to relative position are two considerations: one, a person’s perception of how the social group or groups to which they belong have standing vis-à-vis other groups that could be conceived of as being rival groups. . . . This kind of frame as the cornerstone for the analysis comes out of, in part, the old work of Thorstein Veblen and also out of research on happiness. The latter increasingly shows that people have a greater degree of happiness if they think that they’re better off than whoever constitutes their comparison group rather than simply being better off; so it’s comparative position that comes into play. Conventional economics doesn’t start with an analysis that’s anchored on relative position, as opposed to absolute position; so I think that’s the fundamental shift in stratification economics. But also important to stratification economics is the notion that people have group affiliations or group identifications.
Federal Reserve Bank of Minneapolis (2019)
David Price interviews Enrico Moretti in Econ Focus, a publication of the Federal Reserve Bank of Richmond:
The explosion of the internet, email, and cellphones democratizes the access to information. In the 1990s, people thought it would also make the place where the company is located or where workers live much less important. . . . But what we have seen over the past 25 years is that the opposite is true: Location has become more important than ever before, especially for highly educated workers. The types of jobs and careers that are available in some American cities are increasingly different from the ones available in other American cities.
It’s a paradox because it is true that we can have access to a lot of information and communicate easily from everywhere in the world, but at the same time, location remains crucial for worker productivity and for economic success. In the first three decades after World War II, manufacturing was the most important source of high-paying jobs in the United States. Manufacturing was geographically clustered, but the amount of clustering was limited. Over the past 30 years, manufacturing employment has declined, and the innovation sector has become a key source of good jobs. The innovation sector tends to be much more geographically clustered. Thus, in the past, having access to good jobs was not tied to a specific location as much as it is today. I expect the difference in wages, earnings, and household incomes across cities to continue growing at least for the foreseeable future. . . . Thus, the concentration we observe in tech employment has drawbacks in the sense that it increases inequality across cities, but at the same time, it is good from the point of view of the overall production of innovation in the country. I see this as an equity-efficiency trade-off.
Federal Reserve Bank of Richmond (2019)