
Theory of Learning

Paper Session

Saturday, Jan. 6, 2024 8:00 AM - 10:00 AM (CST)

Grand Hyatt, Travis D
Hosted By: Econometric Society
  • Chair: Daehyun Kim, POSTECH

Neutral Mechanisms: On the Feasibility of Information Sharing

Ernesto Rivera Mora, Yale University

Abstract

The paper analyzes information sharing via neutral mechanisms when an informed party will face future interactions with an uninformed party. Neutral mechanisms are mechanisms that do not rely on (1) providing evidence, (2) conducting experiments, (3) verifying the state, or (4) changing the after-game (i.e., the available choices and payoffs of future interactions). They include cheap talk, long cheap talk, noisy communication, mediation, money burning, and transfer schemes, among other mechanisms. To address this problem, the paper develops a reduced-form approach that characterizes the agents' payoffs in terms of belief-based utilities. This effectively induces a psychological game, where the psychological preferences summarize information-sharing incentives. The first main result states that if an expert's reduced form (i.e., belief-based utility) satisfies a weak supermodularity condition between the state and hierarchies of beliefs, then there is a neutral mechanism that induces complete revelation of the state; moreover, the result identifies a mechanism that is easy to implement. The second main result states that if the expert's reduced-form representation (i.e., set of belief-based utilities) satisfies a strict submodularity condition between the state and the hierarchies of beliefs, then neutral mechanisms are futile for any (relevant) information sharing. This places a limit on the ability to use neutral mechanisms for information sharing. The paper goes on to show how the approach is useful in applications related to political economy and industrial organization.
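One way to read the two conditions, with notation that is illustrative rather than the paper's: write u(θ, μ) for the expert's belief-based (reduced-form) utility at state θ and induced hierarchy of beliefs μ, with states and hierarchies ranked by some order. The results then hinge on increasing versus strictly decreasing differences, roughly as in the sketch below.

% Illustrative sketch only; the paper's exact objects (hierarchies of beliefs
% and the relevant orders on them) are richer than this notation suggests.
% Weak supermodularity (full revelation attainable by a neutral mechanism):
\[
  u(\theta',\mu') - u(\theta',\mu) \;\ge\; u(\theta,\mu') - u(\theta,\mu)
  \qquad \text{for } \theta' \succeq \theta,\ \mu' \succeq \mu .
\]
% Strict submodularity (neutral mechanisms futile for relevant information sharing):
\[
  u(\theta',\mu') - u(\theta',\mu) \;<\; u(\theta,\mu') - u(\theta,\mu)
  \qquad \text{for } \theta' \succ \theta,\ \mu' \succ \mu .
\]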

Slow and Easy: a Theory of Browsing

Evgenii Safonov, Queen Mary University of London

Abstract

An agent needs to choose the best alternative from a menu of unknown composition, with alternatives drawn randomly with replacement. The agent is boundedly rational and employs an automaton decision rule: she has finitely many memory states, and, in each, she can inquire about some attribute of the currently drawn alternative and transition (possibly stochastically) either to another state or to a decision. Defining the complexity of a decision rule by the number of transitions, I study the minimal complexity of a decision rule that allows the agent to choose the best alternative from any menu with probability arbitrarily close to one. Agents in my model differ in their languages, i.e., collections of binary attributes used to describe alternatives. My first result shows that the tight lower bound on complexity among all languages is 3⌈log_2(m)⌉, where m is the number of distinctly valued alternatives. My second result provides a linear upper bound. Finally, I call a language adaptive if it facilitates an additive utility representation with the smallest number of attributes. My third result shows that an adaptive language always admits the least complex decision rule that solves the choice problem. When (3/4) · 2^n < m ≤ 2^n for a natural number n, a language admits the least complex decision rule if and only if it is adaptive.
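As a rough, self-contained illustration of the objects in the abstract (not the paper's construction), the Python sketch below encodes an automaton decision rule whose memory states query binary attributes of the currently drawn alternative, counts its complexity as the number of transitions, and computes the 3⌈log_2(m)⌉ lower bound for reference. All names and the toy rule are hypothetical; the toy rule only illustrates the data structure and is not a rule that solves the general problem to which the bound applies.

import math
import random

# Hedged sketch: an automaton decision rule in the spirit of the abstract.
# The toy rule and all names are illustrative assumptions, not the paper's construction.
class AutomatonRule:
    def __init__(self, transitions):
        # transitions[state] = (attribute, {0: next, 1: next}), where "next" is
        # another memory state or one of the decisions "accept" / "reject".
        self.transitions = transitions

    def complexity(self):
        # Complexity = total number of transitions across memory states.
        return sum(len(branches) for _, branches in self.transitions.values())

    def run(self, draw_alternative, start, max_draws=10_000):
        # Browse: repeatedly draw an alternative (with replacement) and process it
        # through the automaton until some alternative is accepted.
        for _ in range(max_draws):
            alt = draw_alternative()          # dict: attribute name -> 0/1
            state = start
            while True:
                attr, branches = self.transitions[state]
                nxt = branches[alt[attr]]
                if nxt == "accept":
                    return alt
                if nxt == "reject":
                    break                      # discard and draw a new alternative
                state = nxt
        return None

def lower_bound(m):
    # Tight lower bound on complexity over all languages, as stated in the abstract.
    return 3 * math.ceil(math.log2(m))

# Toy usage: accept only the alternative whose two (hypothetical) attributes are both 1.
rule = AutomatonRule({
    "s0": ("a0", {1: "s1", 0: "reject"}),
    "s1": ("a1", {1: "accept", 0: "reject"}),
})
menu = [{"a0": 1, "a1": 1}, {"a0": 1, "a1": 0}, {"a0": 0, "a1": 1}]
print(rule.run(lambda: random.choice(menu), start="s0"))  # the accepted alternative
print(rule.complexity())   # complexity of the toy rule: 4 transitions
print(lower_bound(4))      # the abstract's lower bound for m = 4 distinctly valued alternatives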

Social Learning through Action-Signals

Wenji Xu, City University of Hong Kong

Abstract

This paper studies sequential social learning, in which agents learn about an underlying state from others' actions. In contrast to classic models with a network observational structure, agents arrive in cohorts and observe action-signals regarding previous cohorts' actions. I identify a simple necessary and sufficient condition for asymptotic learning, called "separability," which is a joint property of the action-signals and of agents' private information about the state. A necessary condition for separability is "unbounded beliefs," which requires agents' private information to generate strong evidence of the true state, even if only with small probability. With unbounded beliefs, separability is satisfied if action-signals have "double thresholds," so that, at a minimum, they reveal whether the number of agents in each cohort choosing actions below a choice threshold exceeds a given threshold. Without double thresholds, learning can be confounded, so that agents' actions are forever nontrivially split among the available choices.
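For readers unfamiliar with the term, "unbounded beliefs" is usually formalized through private beliefs whose support reaches both ends of the unit interval; the sketch below states the standard condition in generic notation, which may differ from the paper's.

% Standard formalization (generic notation; the paper's setup may differ).
% With a binary state \omega \in \{0,1\} and private belief p(s) = \Pr(\omega = 1 \mid s),
% beliefs are unbounded when arbitrarily strong private evidence arises with positive
% probability in both directions:
\[
  \inf \operatorname{supp} p = 0
  \quad\text{and}\quad
  \sup \operatorname{supp} p = 1 .
\]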

Weighted Garbling

Daehyun Kim, POSTECH
Ichiro Obara, University of California-Los Angeles

Abstract

We introduce and develop an information order for experiments that is based on a generalized notion of garbling called weighted garbling. An experiment is more informative than another experiment in this order if the latter experiment is obtained by a weighted garbling of the former experiment. This notion can be shown to be equivalent to a regular garbling conditional on some event for the former experiment. We also characterize this order in terms of posterior beliefs and show that it depends only on the support of posterior beliefs, not on their distribution. Our main results are two characterizations of the weighted-garbling order based on decision problems. For static Bayesian decision problems, one experiment is more informative than another in the weighted-garbling order if and only if a decision maker is guaranteed to achieve some weight-based fraction of the optimal expected payoff given the latter experiment for any decision problem. When the weighted garbling is a regular garbling, this lower bound reduces to the optimal expected payoff itself as the fraction becomes one, so this result generalizes the result in Blackwell (1951, 1953). We also consider a class of stopping-time problems in which the state of nature changes over time according to a hidden Markov process and a patient decision maker can conduct the same experiment as many times as she wants, without any cost, before she makes a one-time decision. We show that an experiment is more informative than another in the weighted-garbling order if and only if the decision maker achieves a weakly higher expected payoff for any problem in this class with a regular prior belief.
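The weighted-garbling order itself is the paper's contribution, but the regular (Blackwell) garbling it generalizes is easy to check computationally: experiment Q is a garbling of experiment P exactly when Q = P M for some row-stochastic matrix M, which is a linear feasibility problem. The Python sketch below implements that baseline check; the function name and the toy example are assumptions for illustration, not code from the paper.

import numpy as np
from scipy.optimize import linprog

# Hedged sketch: feasibility check for a *regular* Blackwell garbling, the special
# case that the weighted-garbling order generalizes. Experiments are row-stochastic
# matrices: rows index states, columns index signals.
def is_garbling(P, Q):
    """Return True if Q = P @ M for some row-stochastic M (M >= 0, rows summing to 1)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n_states, n_s = P.shape
    _, n_t = Q.shape
    n_vars = n_s * n_t   # unknowns: entries of M (signals of P x signals of Q)

    A_eq, b_eq = [], []
    # Channel constraints: sum_s P[i, s] * M[s, t] = Q[i, t] for every state i and signal t.
    for i in range(n_states):
        for t in range(n_t):
            row = np.zeros(n_vars)
            for s in range(n_s):
                row[s * n_t + t] = P[i, s]
            A_eq.append(row)
            b_eq.append(Q[i, t])
    # Stochasticity constraints: each row of M sums to one.
    for s in range(n_s):
        row = np.zeros(n_vars)
        row[s * n_t:(s + 1) * n_t] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

    res = linprog(c=np.zeros(n_vars), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n_vars, method="highs")
    return res.success

# Toy usage: a completely uninformative one-signal experiment is a garbling of anything.
P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[1.0], [1.0]]
print(is_garbling(P, Q))   # expected: True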
JEL Classifications
  • D8 - Information, Knowledge, and Uncertainty