Beyond-Bayesian Persuasion
Paper Session
Saturday, Jan. 4, 2025 8:00 AM - 10:00 AM (PST)
- Chair: Piotr Dworczak, Northwestern University
Constrained Data-Fitters
Abstract
We study the likelihood maximization and updating of an agent subject to misspecification and updating frictions. The frictions may reflect computational limitations, cognitive constraints, or behavioral biases. We use a framework reminiscent of machine learning to jointly characterize the agent's estimation and updating. In the absence of frictions, the framework simplifies to familiar maximum likelihood estimation and Bayesian updating. We demonstrate that, under certain intuitive cognitive constraints, simpler models yield the most effective constrained fit to the actual data-generating process: more complex models could potentially offer a superior fit, but the agent may lack the capability to assess this fit accurately. With some additional structure, the agent's problem is isomorphic to a familiar rational inattention problem. We again derive a simplicity result, identifying circumstances under which the agent attaches positive probability to only a limited number of values of the latent variable.
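The abstract does not spell out the formal problem; purely as an illustration of what a "constrained fit" could look like (our notation and objective, not necessarily the authors'), one can think of the agent as solving

\[
\hat{\theta} \in \arg\max_{\theta \in \Theta_C} \; \mathbb{E}_{x \sim p^{*}}\!\big[\log f_{\theta}(x)\big],
\]

where $p^{*}$ is the true data-generating process, $\{f_{\theta}\}_{\theta \in \Theta}$ is the agent's (possibly misspecified) family of models, and $\Theta_C \subseteq \Theta$ encodes the cognitive or computational friction. With $\Theta_C = \Theta$ and the expectation replaced by the sample average, this is ordinary maximum likelihood estimation (asymptotically, minimization of the Kullback-Leibler divergence from $p^{*}$), and the corresponding frictionless updating rule is standard Bayesian conditioning, matching the abstract's no-friction benchmark.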
Information Design with Unknown Prior
Abstract
Classical information design models (e.g., Bayesian persuasion and cheap talk) require players to have perfect knowledge of the prior distribution of the state of the world. Our paper studies repeated persuasion problems in which the information designer does not know the prior. The information designer learns to design signaling schemes from repeated interactions with the receiver. We design learning algorithms for the information designer that achieve no regret relative to the optimal signaling scheme under a known prior, under two models of the receiver's decision-making. (1) The first model assumes that the receiver knows the prior, performs posterior updating, and best responds to signals. In this model, we design a learning algorithm for the information designer with $O(\log T)$ regret in the general case, and another algorithm with $\Theta(\log \log T)$ regret in the case where the receiver has only two actions. (2) The second model assumes that the receiver does not know the prior and employs a no-regret learning algorithm to take actions. We show that the information designer can achieve regret $O(\sqrt{\mathrm{rReg}(T)\, T})$, where $\mathrm{rReg}(T)=o(T)$ is an upper bound on the receiver's learning regret. Our work thus provides a learning foundation for the problem of information design with an unknown prior.
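To make the repeated-interaction protocol concrete, the following is a minimal toy sketch in Python of the abstract's first receiver model (the receiver knows the prior, updates by Bayes' rule, and best responds), using the textbook binary-state, binary-action persuasion example and a naive plug-in estimate of the prior with a small conservative margin. It is an illustration of the setting only, not the paper's $O(\log T)$ or $\Theta(\log \log T)$ algorithm, and the assumption that the designer observes the realized state after each round is ours.

```python
import random

# Toy repeated Bayesian persuasion with an unknown prior.
# NOT the paper's algorithm: a plug-in heuristic in the classic binary-state /
# binary-action example, included only to make the interaction protocol concrete.
# Assumptions (ours): the state is i.i.d. Bernoulli(mu_true), the designer
# observes the realized state after each round, and the receiver knows the prior
# and best-responds to the induced posterior.

random.seed(0)

mu_true = 0.3   # true P(state = 1), unknown to the designer
T = 10_000      # number of rounds
margin = 0.01   # conservative shading of the estimate (heuristic, not from the paper)

def optimal_scheme(mu):
    """Probability of recommending action 1 in state 0, given prior estimate mu.

    With a known prior mu < 1/2, the optimal scheme recommends action 1 always in
    state 1 and with probability mu / (1 - mu) in state 0, keeping the posterior
    after a recommendation at exactly 1/2 so the receiver obeys.
    """
    if mu >= 0.5:
        return 1.0
    return mu / (1.0 - mu)

counts = [0, 0]          # empirical counts of each observed state
designer_payoff = 0.0    # designer earns 1 whenever the receiver takes action 1

for t in range(1, T + 1):
    n = counts[0] + counts[1]
    mu_hat = counts[1] / n if n > 0 else 0.5
    # Shade the estimate downward so an overshoot does not break obedience.
    q = optimal_scheme(max(mu_hat - margin, 0.0))

    state = 1 if random.random() < mu_true else 0
    recommend_1 = (state == 1) or (random.random() < q)

    # The receiver, who knows mu_true, obeys the recommendation only if the
    # induced posterior on state 1 is at least 1/2.
    if recommend_1:
        posterior = mu_true / (mu_true + (1.0 - mu_true) * q)
        if posterior >= 0.5:
            designer_payoff += 1.0

    counts[state] += 1   # designer updates its prior estimate from the realization

benchmark = T * (2 * mu_true if mu_true < 0.5 else 1.0)  # payoff with known prior
print(f"designer payoff: {designer_payoff:.0f}, known-prior benchmark: {benchmark:.0f}")
```

The downward shading of the estimate reflects the basic tension the paper studies: if the plug-in estimate overshoots the true prior, the receiver's obedience constraint is violated and the designer earns nothing in that round.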
JEL Classifications
- D81 - Criteria for Decision-Making under Risk and Uncertainty
- D83 - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness