
Mechanism Design and Artificial Intelligence

Paper Session

Saturday, Jan. 4, 2025 10:15 AM - 12:15 PM (PST)

Hilton San Francisco Union Square, Union Square 15 and 16
Hosted By: American Economic Association
  • Chair: Jann Lorenz Spiess, Stanford University

Buyer-Optimal Algorithmic Consumption

Shota Ichihashi, Queen's University
Alex Smolin, Toulouse School of Economics

Abstract

We study a bilateral trade model in which a product is recommended to a buyer by an algorithm, based on the product's value to the buyer and its price. We fully characterize an algorithm that maximizes the buyer's ex ante payoff and show that it strategically biases consumption to incentivize lower prices. Under the optimal algorithmic consumption, informing the seller about the buyer's value does not change the buyer's ex ante payoff but leads to a more equitable distribution of interim payoffs.
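
A minimal numerical sketch of the strategic-bias channel, not the paper's characterization: with buyer value v ~ U[0,1] (an illustrative assumption), compare a truthful rule that recommends purchase whenever v >= p against a hypothetical steeper threshold rule t(p) = min(2p, 1) that withholds recommendations more aggressively as the price rises.

import numpy as np

v = np.linspace(0.0, 1.0, 100_001)       # buyer values, v ~ U[0,1] (assumption)
prices = np.linspace(0.01, 0.99, 99)     # seller's candidate prices

def outcomes(threshold):
    """Seller's best response and buyer surplus when the algorithm
    recommends purchase iff v >= threshold(p)."""
    profit, surplus = [], []
    for p in prices:
        buy = v >= threshold(p)
        profit.append(p * buy.mean())             # seller's expected profit
        surplus.append(((v - p) * buy).mean())    # buyer's ex ante payoff
    i = int(np.argmax(profit))                    # seller picks the best price
    return prices[i], surplus[i]

print(outcomes(lambda p: p))                # truthful: recommend iff v >= p
print(outcomes(lambda p: min(2 * p, 1.0))) # steeper rule penalizing high prices

On this grid the steeper rule pushes the seller's optimal price from about 0.5 down to 0.25 and roughly doubles the buyer's ex ante payoff (about 0.25 vs. 0.125), illustrating how biasing recommendations against high prices can benefit the buyer even though some individually profitable trades are blocked.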

Can AI Help Reduce Human Bias? Evidence from Police Rearrest Predictions

Yong Suk Lee, University of Notre Dame
Andrea Vallebueno, Stanford University

Abstract

Amidst growing concerns about racial bias in predictive algorithms, we explore how AI predictions can either mitigate or exacerbate human bias, particularly in the context of racial disparities in rearrest predictions. Our experimental study involved showing police officers the profiles of young offenders and asking them to predict rearrest probabilities within three years, first without and then after seeing the AI algorithm's assessment. The experiment varied the visibility of the offender's race: revealed to one group, hidden from another, and mixed (some shown, some hidden) in a third. Additionally, we explored how informing officers about the model's accuracy affected their responses. Our findings indicate that officers adjust their predictions towards the AI's assessment when the race of the profile is disclosed. However, these adjustments exhibit significant racial disparities: a sizable gap in initial rearrest predictions between Black and White offenders persists even when all observable characteristics are controlled for. Furthermore, only Black officers significantly reduced their predictions after viewing the AI's assessments; White officers did not. Our results highlight a nuanced and only partially effective role of AI in reducing bias in recidivism predictions, emphasizing the complexities of AI-assisted human judgment within criminal justice.
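
The adjustment toward the AI can be summarized as the weight officers place on the algorithm's signal. A hedged sketch with made-up placeholder numbers (not the study's data or method):

import numpy as np

# Hypothetical example data: one row per officer-profile pair.
prior = np.array([0.60, 0.40, 0.55, 0.30])  # prediction before seeing the AI
post  = np.array([0.52, 0.42, 0.48, 0.33])  # prediction after seeing the AI
ai    = np.array([0.45, 0.50, 0.40, 0.40])  # the AI's rearrest assessment

# Weight on the AI: 1 = full adoption of the AI's number, 0 = no adjustment.
# (Cases with ai == prior would need to be dropped to avoid dividing by zero.)
weight = (post - prior) / (ai - prior)
print(weight.mean())

Comparing this average weight across the race-revealed, race-hidden, and mixed groups, and across officer groups, is one simple way to quantify the kinds of disparities the abstract reports.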

Optimal Membership Design

Piotr Dworczak, Northwestern University
Marco Reuter, International Monetary Fund
Scott Duke Kominers, Harvard Business School
Changhwa Lee, University of Bristol

Abstract

Membership design involves allocating an economic good whose value to any individual depends on who else receives it. We introduce a framework for optimal membership design by combining an otherwise standard mechanism-design model with allocative externalities that depend flexibly on agents' observable and unobservable characteristics. Our main technical result characterizes how the optimal mechanism depends on the pattern of externalities. Specifically, we show how the number of distinct membership tiers (differing in prices and potentially involving rationing) is increasing in the complexity of the externalities. This insight may help explain a number of mechanisms used in practice to sell membership goods, including musical artists charging below-market-clearing prices for concert tickets, heterogeneous pricing tiers for access to digital communities, the use of vesting and free allocation in the distribution of network tokens, and certain admission procedures used by colleges concerned about the diversity of the student body.
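
A drastically simplified, one-tier toy (not the paper's framework) can still show why positive allocative externalities push good prices below the revenue-maximizing level: here every member raises every other member's value by a uniform amount, and the welfare-maximizing price lands well below the revenue-maximizing one.

import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(-1.0, 1.0, 10_000)  # private tastes (illustrative assumption)
gamma = 0.3                             # uniform positive externality per member

def member_share(p):
    """Largest equilibrium membership share: join iff theta + gamma*share >= p."""
    share = 1.0
    for _ in range(200):                # iterate down to the largest fixed point
        share = np.mean(theta + gamma * share >= p)
    return share

prices = np.linspace(-0.5, 1.0, 151)
revenue, welfare = [], []
for p in prices:
    s = member_share(p)
    members = theta + gamma * s >= p
    revenue.append(p * s)                                    # designer's revenue
    welfare.append(np.mean((theta + gamma * s) * members))   # total member value

print("revenue-max price:", prices[int(np.argmax(revenue))])
print("welfare-max price:", prices[int(np.argmax(welfare))])

In this toy the welfare-maximizing price is even negative, echoing the free-allocation and below-market-clearing examples in the abstract; the paper's actual result, on how the number of tiers grows with the complexity of the externalities, requires the full mechanism-design machinery.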

Reputational Algorithm Aversion

Gregory Weitzner, McGill University

Abstract

People are often reluctant to incorporate information produced by algorithms into their decisions, a phenomenon called "algorithm aversion". This paper shows how algorithm aversion arises when the choice to follow an algorithm conveys information about a human's ability. I develop a model in which workers make forecasts of a random outcome based on their own private information and an algorithm's signal. Low-skill workers receive worse information than the algorithm and hence should always follow the algorithm's signal, while high-skill workers receive better information than the algorithm and should sometimes override it. However, due to reputational concerns, low-skill workers inefficiently override the algorithm to increase the likelihood they are perceived as high-skill. The model provides a fully rational microfoundation for algorithm aversion that aligns with the broad concern that AI systems will displace many types of workers.
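
A hedged simulation of the model's premise (all parameter values are illustrative assumptions): signals are noisy versions of the outcome, and the algorithm is more precise than a low-skill worker but less precise than a high-skill one, so absent reputational concerns the low-skill worker forecasts best by following the algorithm.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
y = rng.normal(size=n)                     # outcome to be forecast

# Signal = outcome + noise; smaller sigma = better information (assumed values).
sigma = {"algorithm": 0.5, "low_skill": 1.0, "high_skill": 0.3}
signal = {k: y + rng.normal(scale=s, size=n) for k, s in sigma.items()}

def mse(f):
    # forecast error when the signal itself is reported as the forecast
    return np.mean((f - y) ** 2)

for worker in ("low_skill", "high_skill"):
    print(worker,
          "| follow algorithm:", round(mse(signal["algorithm"]), 3),
          "| own signal:", round(mse(signal[worker]), 3))

The low-skill worker's error is about 1.0 on her own signal versus 0.25 when following the algorithm, while the high-skill worker does better overriding (about 0.09); the paper's point is that reputational concerns nonetheless push low-skill workers to override, since always following the algorithm would reveal their type.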

The Shapley Value and the Nucleolus of a Two-Sided Platform Game

Jinglei Huang, Tsinghua University
Danxia Xie, Tsinghua University

Abstract

This paper introduces a new coalitional game with transferable utility, called a two-sided platform game. Participation by platform users on one side benefits the other side, and the platform can be established if and only if there is more than one entrepreneur. Well-known point solutions and set solutions are investigated. It turns out that the kernel and the nucleolus coincide, and both the Shapley value and the nucleolus have simple expressions. The paper sheds light on platforms and antitrust issues: when there is more than one platform entrepreneur, the utility share of each entrepreneur is relatively low in both point solutions.
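
A small computational sketch of the solution concepts (the characteristic function below is a made-up toy that requires only at least one entrepreneur and values cross-side user pairs, so it is not the paper's exact game): enumerating orderings gives exact Shapley values and shows each entrepreneur's share dropping once a second entrepreneur is present.

from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition = coalition | {p}
            phi[p] += v(coalition) - before
    return {p: x / len(orders) for p, x in phi.items()}

def platform_value(coalition):
    # Toy two-sided platform: value = (# side-A users) * (# side-B users),
    # realized only if at least one entrepreneur is present (an assumption).
    es = sum(1 for p in coalition if p.startswith("E"))
    a = sum(1 for p in coalition if p.startswith("A"))
    b = sum(1 for p in coalition if p.startswith("B"))
    return a * b if es >= 1 else 0

for entrepreneurs in (["E1"], ["E1", "E2"]):
    players = entrepreneurs + ["A1", "A2", "B1", "B2"]
    phi = shapley(players, platform_value)
    print(len(entrepreneurs), "entrepreneur(s):",
          {p: round(x, 2) for p, x in phi.items()})

With one entrepreneur, her veto over establishing the platform earns her a substantial Shapley share; with two, either one suffices, so each entrepreneur's marginal contribution, and hence share, falls sharply, in line with the abstract's antitrust observation.
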
JEL Classifications
  • D0 - General