Effects of Artificial Intelligence

Paper Session

Saturday, Jan. 6, 2024 10:15 AM - 12:15 PM (CST)

Convention Center, 221C
Hosted By: American Economic Association
  • Chair: Jonathan Moreno-Medina, University of Texas-San Antonio

AI at the Wheel: The Effect of Autonomous Driving Features on Safety

Vikram Maheshri, University of Houston
Clifford M. Winston, Brookings Institution
Yidi Wu, Georgetown University

Abstract

Autonomous driving capabilities have become standard options on the vast majority of new vehicles sold in the US. These features differ from other auto safety features in that they are explicit substitutes for driver attention. As such, we might expect Peltzman (1975) effects to offset some of the potential safety benefits that they offer. Using a comprehensive dataset on all registered vehicles in Texas from 2010 to 2018 and their accident histories, we find that Level 1 autonomous features reduce accident risk by roughly one-third. We exploit detailed information on the timing of the introduction of these features in different vehicles to identify these effects using two complementary strategies that require distinct identifying assumptions yet nevertheless yield quantitatively similar results.
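
The abstract leaves the two identification strategies unstated, so the following is only a minimal sketch of one standard approach: a two-way fixed-effects regression of accident risk on an indicator for Level 1 features. The file and column names (tx_vehicle_accidents.csv, had_accident, level1, make_model) are hypothetical, not the authors'.

```python
# Illustrative sketch, not the authors' code. Assumed columns in df:
# vin, year, had_accident (0/1), level1 (0/1 indicator that the
# vehicle carries Level 1 autonomous features), make_model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tx_vehicle_accidents.csv")  # hypothetical file

# Model and year fixed effects absorb secular trends; the coefficient
# on level1 is the estimated change in annual accident probability
# when autonomous features are present.
model = smf.ols(
    "had_accident ~ level1 + C(make_model) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["vin"]})

print(model.params["level1"])
```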

Can Dynamic Pricing Algorithm Facilitate Tacit Collusion? An Experimental Study Using Deep Reinforcement Learning in Airline Revenue Management

Chengyan Gu, Columbia University

Abstract

This study conducts a series of simulated experiments in which airlines use deep Q-learning (DQL) algorithms to dynamically control the class of quantity and price pairs offered to the market through the booking horizon, in order to better understand the algorithmic collusion problem. We show that, in a monopoly market, a DQL algorithm can learn stochastic demand without any prior knowledge and achieve the optimal monopoly revenue. In a duopoly market, with limited information requirements, DQL algorithms can contribute to the same knowledge pool, coordinate airlines' behaviors, learn to collude, and share the monopoly profit equally. Compared with the expected marginal seat revenue (EMSR)-b heuristic, DQL algorithms are more adaptive in learning new demand stochasticity and are more likely to sustain collusive outcomes.
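
For intuition about the learning loop, here is a minimal tabular Q-learning sketch of a single seller pricing against unknown stochastic demand. The paper's setting is much richer (deep Q-networks, a multi-period booking horizon, two competing airlines); this single-state bandit version, with made-up parameters, only shows how a learner with no prior demand knowledge converges toward the monopoly price.

```python
# Minimal tabular stand-in for the paper's deep Q-learning setup.
# All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(50, 500, 10)      # action space: candidate fares
Q = np.zeros(len(prices))              # single-state Q-table
eps, alpha = 0.1, 0.05                 # exploration and learning rates

def demand(p):
    # Stochastic linear demand, unknown to the learner.
    return max(0.0, 100 - 0.2 * p + rng.normal(0, 5))

for t in range(50_000):
    # Epsilon-greedy action selection, then a bandit-style Q update
    # toward realized revenue.
    a = rng.integers(len(prices)) if rng.random() < eps else Q.argmax()
    revenue = prices[a] * demand(prices[a])
    Q[a] += alpha * (revenue - Q[a])

print(f"learned fare: {prices[Q.argmax()]:.0f}")
```

With expected demand 100 - 0.2p, revenue is maximized at p = 250, which lies on the fare grid, so the learner should settle near it. The paper's deep variant replaces the table with a network over booking-horizon states and adds a second learner, which is where the collusion question arises.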

Coordinated vs Efficient Prices: The Impact of Algorithmic Pricing on Multifamily Rental Markets

Gi Heung Kim, University of Pennsylvania
Sophie Calder-Wang, University of Pennsylvania

Abstract

Algorithmic pricing can improve efficiency by helping firms set prices that are more responsive to changing market conditions. However, widespread adoption of the same algorithm could also lead to price coordination, resulting in elevated prices. In this paper, we examine the impact of algorithmic pricing on the U.S. multifamily rental housing market using hand-collected adoption decisions of property management companies merged with data on market-rate multifamily apartments from 2005 to 2019. Our findings suggest that algorithm adoption helps building managers set more responsive prices: buildings with the software increase prices during booms but lower prices during busts, compared to non-adopters in the same market. However, we also find evidence that greater algorithm penetration can lead to higher prices, raising rents among both adopters and non-adopters in the same market. Such empirical patterns are consistent with either price coordination through the algorithm or widespread pricing errors before software adoption.
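
As a sketch of the responsiveness comparison, assuming a building-by-period rent panel: interact adoption with an indicator for local market booms and absorb building and market-year effects. This is a hedged illustration, not the authors' specification, and every name below is hypothetical.

```python
# Illustrative sketch only. Assumed columns in df: rent (log rent),
# adopter (0/1 software adoption), boom (0/1 market demand boom),
# building_id, market_year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multifamily_rents.csv")  # hypothetical file

# A positive adopter:boom coefficient (with the mirror-image pattern
# in busts) would indicate that adopters' prices respond more to
# market conditions than non-adopters' in the same market.
model = smf.ols(
    "rent ~ adopter * boom + C(building_id) + C(market_year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["building_id"]})

print(model.params.filter(like="adopter"))
```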

The Hidden Effects of Algorithmic Recommendations

Alex Albright, Federal Reserve Bank of Minneapolis

Abstract

Understanding how algorithmic systems change human decisions is crucial for policy. Algorithmic systems often provide both predictions and recommendations to decision-makers. While predictions and recommendations are distinct, their effects on decisions are rarely disentangled. I isolate the hidden effects of algorithmic recommendations by leveraging a setting where recommendations given to bail judges changed, but the algorithmic predictions available to them did not. I find that recommendations significantly impacted decisions, with lenient recommendations increasing lenient bail decisions by 55-70% for marginal cases. I explore possible mechanisms behind this effect and provide evidence that recommendations can affect decisions by changing the private costs of errors to human decision-makers. Finally, I show that variation in adherence to algorithmic recommendations complicates how algorithmic systems affect racial disparities. Judges are more likely to deviate from lenient recommendations for Black defendants than white defendants with identical algorithmic risk scores.
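
The last finding reduces to a simple conditional tabulation: among cases that received a lenient recommendation, compare deviation rates across defendant race within each algorithmic risk score. A minimal sketch follows; the data layout and column names are hypothetical.

```python
# Hedged illustration of the adherence-disparity check; column names
# (risk_score, lenient_rec, judge_lenient, race) are hypothetical.
import pandas as pd

df = pd.read_csv("bail_decisions.csv")  # hypothetical file
lenient = df[df["lenient_rec"] == 1]

# Deviation = judge declines to follow the lenient recommendation.
deviation = (
    lenient.assign(deviated=lambda d: 1 - d["judge_lenient"])
    .groupby(["risk_score", "race"])["deviated"]
    .mean()
    .unstack("race")
)
# Rows where the Black-defendant rate exceeds the white-defendant rate
# at the same score match the pattern described in the abstract.
print(deviation)
```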

Unemployment Insurance Fraud in the Debit Card Market

Umang Khetan, University of Iowa
Jetson Leder-Luis, Boston University
Jialan Wang, University of Illinois-Urbana-Champaign
Yunrong Zhou, Purdue University

Abstract

We study fraud in the unemployment insurance (UI) system using a dataset of 35 million debit card transactions. We apply machine learning techniques to group cards into clusters corresponding to varying levels of suspicious or potentially fraudulent activity. We then conduct a difference-in-differences analysis based on the staggered adoption of state-level identity verification systems between 2020 and 2021 to assess the effectiveness of screening for reducing fraud. Our findings suggest that identity verification reduced payouts to suspicious cards by 40% relative to non-suspicious cards, which were largely unaffected by these technologies. Our results indicate that identity screening of new and continuing applicants may be an effective mechanism for mitigating fraud in the UI system and for ensuring the integrity of benefits programs more broadly.
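
A minimal sketch of the two-step design under stated assumptions: cluster card-level behavior into suspicion groups, then run a staggered difference-in-differences on payouts around each state's identity-verification rollout. The features, the labeling of clusters as suspicious, and all file and column names are hypothetical, not taken from the paper.

```python
# Illustrative two-step sketch, not the authors' pipeline.
import pandas as pd
from sklearn.cluster import KMeans
import statsmodels.formula.api as smf

# Step 1: unsupervised clustering of hypothetical per-card features.
cards = pd.read_csv("card_features.csv")
feats = ["n_states_used", "atm_share", "night_share", "avg_withdrawal"]
cards["cluster"] = KMeans(n_clusters=5, n_init=10).fit_predict(cards[feats])
cards["suspicious"] = cards["cluster"].isin([3, 4]).astype(int)  # assumed labels

# Step 2: staggered DiD; `post` turns on when the card's state adopts
# identity verification.
panel = pd.read_csv("card_month_payouts.csv").merge(cards, on="card_id")
model = smf.ols(
    "payout ~ post * suspicious + C(state) + C(month)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

print(model.params["post:suspicious"])
```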

Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias

Amanda Agan
,
Rutgers University
Diag Davenport
,
Princeton
Jens Ludwig
,
University of Chicago
Sendhil Mullainathan
,
University of Chicago

Abstract

Consumer choices are increasingly mediated by algorithms, which use data on those past choices to infer consumer preferences and then curate future choice sets. Behavioral economics suggests one reason these algorithms so often fail: choices can systematically deviate from preferences. For example, research shows that prejudice can arise not just from preferences and beliefs, but also from the context in which people choose. When people behave automatically, biases creep in; snap decisions are typically more prejudiced than slow, deliberate ones, and can lead to behaviors that users themselves do not consciously want or intend. As a result, algorithms trained on automatic behaviors can misunderstand the prejudice of users: the more automatic the behavior, the greater the error. We empirically test these ideas in a lab experiment, and find that more automatic behavior does indeed seem to lead to more biased algorithms. We then explore the large-scale consequences of this idea by carrying out algorithmic audits of Facebook in its two biggest markets, the US and India, focusing on two algorithms that differ in how users engage with them: News Feed (people interact with friends' posts fairly automatically) and People You May Know, or PYMK (people choose friends fairly deliberately). We find significant out-group bias in the News Feed algorithm (e.g., whites are less likely to be shown Black friends' posts, and Muslims less likely to be shown Hindu friends' posts), but no detectable bias in the PYMK algorithm. Together, these results suggest a need to rethink how large-scale algorithms use data on human behavior, especially in online contexts where so much of the measured behavior might be quite automatic.
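
The audit logic can be stated in a few lines: for each user, compare the out-group share of posts the algorithm actually surfaces with the out-group share of the user's own friend network, which serves as the no-bias benchmark. The sketch below assumes a pre-built audit file with hypothetical column names; it is not the authors' procedure.

```python
# Hedged sketch of an out-group exposure audit. Assumed columns:
# user_id, friend_outgroup_share, feed_outgroup_share.
import pandas as pd

df = pd.read_csv("feed_audit.csv")  # hypothetical file

# Negative values mean the feed under-shows out-group friends' posts
# relative to their presence in the user's network.
df["bias"] = df["feed_outgroup_share"] - df["friend_outgroup_share"]
print(df["bias"].mean(), df["bias"].sem())
```
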
JEL Classifications
  • C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling