+21 votes
asked ago in General Economics Questions by (6.9k points)
To put it another way, what are some areas that seem important but tractable, and therefore might yield productive research questions that could lead to a paper and maybe more?

I’ll open the discussion with one potential answer, below, and I hope others will follow.
commented ago by (110 points)
To add to the last two posts:

Twenty-first century financial market design is a great area for dissertation research. There are lots of incremental changes in rules and practice, and many opportunities to better understand those changes and to propose improvements. One example of which I am aware is Jun Aoyagi, a Ph.D. student at UC Berkeley, who has a paper on strategic speed choice when the exchange imposes small delays. There surely are others already working on related topics, but I conjecture there is additional room for dozens of dissertations in this field.

The earlier suggestion about designing markets for personal data also strikes me as important and promising. The book by Posner and Weyl is useful background, and it frames the question nicely, but it doesn't really tackle market design. Individual users are numerous and tiny relative to giant firms like Amazon, Google, and Facebook on the other side of the market. I wonder whether there is a role for intermediaries to sign up large numbers of users and sell the bundled data at a competitive price. In any case, there seems to be room here for basic theoretical modeling, for specific applications, and for everything in between.

17 Answers

+11 votes
answered ago by (6.9k points)
Here’s a general question I think is likely to be productive: what are the connections between marriage markets and labor markets?

Some relatively well-formulated mathematical questions related to this could be about the behavior of stable matching markets that contain two-career households, i.e. couples looking for two positions.  The set of stable matchings can be empty in this case (Roth, 1984), but, empirically, it seldom is.  

The current state of the art on understanding why not is probably Ashlagi, Braverman, and Hassidim, "Stability in Large Matching Markets with Complementarities," Operations Research, 62(4), 713-732, 2014.  https://web.stanford.edu/~iashlagi/papers/couplesFinal1.pdf
They show that the probability of an empty set of stable matchings goes to zero in the limit if the number of couples grows more slowly than the size of the overall market, but not if it grows linearly. Of course it’s the latter case that is interesting, since we wouldn’t worry about this if we didn’t see markets with a significant proportion of couples. But we know little about this case. If the proportion of couples is small, is the probability small that no stable matching exists? How about if couples have similar preferences, e.g. if they prefer large cities? How about if they have children, so that their preferences over jobs always require them to be in the same location? Etc….
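To make the baseline concrete, here is a minimal sketch of doctor-proposing deferred acceptance for single doctors (illustrative code of my own, not any paper's implementation). With singles only, this proposal-rejection loop always terminates at a stable matching; the couples case is hard precisely because a couple's joint application can displace two doctors at once and reintroduce blocking pairs, so no analogous convergence argument holds:

```python
from collections import deque

def deferred_acceptance(doctor_prefs, hospital_prefs, quotas):
    """Doctor-proposing deferred acceptance (singles only).

    doctor_prefs: doctor -> ordered list of acceptable hospitals
    hospital_prefs: hospital -> ranking of doctors (best first)
    quotas: hospital -> number of positions
    """
    rank = {h: {d: i for i, d in enumerate(pr)} for h, pr in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}   # index of next hospital to try
    held = {h: [] for h in hospital_prefs}       # tentatively accepted doctors
    free = deque(doctor_prefs)
    while free:
        d = free.popleft()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue  # d has exhausted her list and stays unmatched
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        held[h].append(d)
        held[h].sort(key=lambda x: rank[h][x])   # hospital keeps its best doctors
        if len(held[h]) > quotas[h]:
            free.append(held[h].pop())           # reject the worst held doctor
    return held
```

A dissertation-scale exercise could start by adding joint couple proposals to this loop and measuring, in simulation, how often the process cycles as the fraction of couples grows.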
More speculative, less precisely formulated questions might concern bigger connections between career and marriage choices. What kinds of career choices do married couples have to make, and what kinds of professions and careers fit smoothly together in the same household? (I bet that not too many senior diplomats are married to each other, since being an ambassador is a very location-specific job…)

There’s room for both theoretical and empirical work on questions like this.
commented ago by (240 points)
The couples problem is interesting. Al Roth, Parag Pathak, and I also had a paper somewhat similar to Ashlagi et al., but that paper also leaves open the question of what one should expect when there are many married couples. The paper by Biro et al. is quite interesting in this regard. They report that when there are many couples, some algorithms fail to find a stable matching, but a stable matching still exists and can be found by their algorithms. Given their results and the impossibility result of Ashlagi et al., which Al's post mentions, it seems that we are still far from really understanding the problem well...
commented ago by (6.9k points)
Jack Ochs points out to me by email that, while we're thinking about marriage and labor markets, it might also be worth studying divorce and labor turnover.
commented ago by (160 points)
Two comments on the couples problem:

1. The Nguyen-Vohra paper on couples shows that there always exists a stable matching for adjusted quotas, where the new capacity of each hospital differs from the original capacity by at most 3, and the total number of positions is also roughly the same (the total difference is at most 9). This is a very nice result, with an advanced technique (rounding the fractional solution via the Scarf lemma). I wonder whether a tighter bound can be given, e.g. whether there is always a stable matching when the capacity of each hospital is modified by at most 1 (or increased by at most 1)?

2. For labour markets there are several papers based on the Shapley-Shubik assignment game (or, actually, the Koopmans-Beckmann 1957 Econometrica paper), where transfers (wages) are allowed, but I have not seen any in which couples are also present (although this is quite a reasonable situation in reality). This can be seen as an assignment game with externalities, where the negative externality for a pair of positions can be correlated with the distance between the workplaces. Or is there any literature on this? I only know of a general paper on assignment games with externalities (https://link.springer.com/article/10.1007%2Fs40505-017-0117-4).
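As a toy starting point, the transferable-utility assignment game reduces to choosing the one-to-one assignment that maximizes total surplus. Below is a brute-force sketch of my own (feasible only for tiny markets; real instances would use the Hungarian algorithm or an LP). A couples or externalities variant would make the surplus depend on pairs of assignments, e.g. subtracting a distance penalty between the two partners' workplaces, which is exactly what breaks this separable structure:

```python
from itertools import permutations

def max_surplus_assignment(surplus):
    """Find the surplus-maximizing one-to-one assignment by enumeration.

    surplus[i][j] = joint surplus if worker i is matched to firm j
    (square matrix: as many firms as workers).
    Returns (best total surplus, tuple mapping worker i to firm perm[i]).
    """
    n = len(surplus)
    best_total, best = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(surplus[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best = total, perm
    return best_total, best
```

For example, `max_surplus_assignment([[5, 1], [2, 4]])` matches worker 0 to firm 0 and worker 1 to firm 1, for total surplus 9; the Shapley-Shubik result says competitive wages supporting this assignment always exist, and the open question above is what survives once couples' externalities are added.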
+6 votes
answered ago by (280 points)
Here's one research question that I think fits:  What is the virtue of price discovery in multi-round auctions?

In auctions with many heterogeneous objects, the received wisdom is that bidders should be able to observe prices and revise their bids.  For spectrum auctions, this has motivated the use of simultaneous multi-round ascending auctions and combinatorial clock auctions.  Why is this a good thing?

While the use of multi-round procedures can raise revenues for single objects with affiliated values (Milgrom and Weber 1982), much less is known about the case of many objects.  In particular, having many objects seems to raise new benefits from price discovery - and new obstacles requiring activity rules.  Given that we hardly expect real bidders to play a Bayes-Nash equilibrium in complex spectrum auctions, is there a formal sense in which multi-round procedures are generally good for revenue, or efficiency?

And, if we had a formal criterion in hand, would it select the current state-of-the-art?  (The SMRA and the CCA.) Or would it recommend something else entirely?
+5 votes
answered ago by (260 points)
In addition to the specific questions above, the emergence of internet markets has given economists new playgrounds to weigh in on important and long-standing questions in the literature. What's even better is that economists are being called upon to actively design these nascent markets. I think there's a huge space to consider how these issues may extend to new markets, and how we can help answer existing puzzles. Here are a couple of examples:

I recently read a great paper by John Horton (can be found at http://john-joseph-horton.com/papers/minimum_wage.pdf) on the effects of a minimum wage increase. There have been many papers about this topic, weighing in on the potential trade-off between lower hiring and higher wages for those hired. Horton, through an experiment, shows that due to productivity information that exists on an online labor platform, employers substitute away from hiring low-productivity, cheaper workers and toward higher-productivity, more expensive workers. In other words, there is a labor-labor substitution. He finds that there is slightly less hiring, but the more drastic change is along the intensive margin: because hired workers are more productive, they complete the job faster. These are great points that I don't think are easy to observe in many "traditional market" settings.

Another recent paper that I read is by Bohren, Imas, and Rosenberg (https://static1.squarespace.com/static/57967bc7cd0f68048126361d/t/5b6cf8cc0ebbe8b4055441e9/1533868241208/BohrenImasRosenberg_DiscriminationJuly2018.pdf). This paper studies gender discrimination in an online market for answering math questions. The authors leverage dynamic effects--how having more reviews affects the way that subsequent discrimination happens--to investigate whether the discrimination (which they do observe) is driven by tastes or by incorrect beliefs. They find the latter. This paper combines a field experiment and some very nice theory to answer a question that is tremendously important and also very difficult to study without detailed and precise reputation measures.

In both cases, these are questions that others have investigated before. But there's plenty of room for clever young economists to use different kinds of tools to weigh in on fundamental questions of our field in these new markets.
+5 votes
answered ago by (240 points)
I've been interested in the interactions between matching/allocation mechanisms and participants' incentives to invest in their quality. For example, what do we know about whether school choice incentivizes schools to improve, or whether there is some sense in which the incentive backfires?

In the Gale-Shapley-Roth type framework, the basic paper on an issue like this is by Balinski and Sonmez (1999, JET). They define what it means for a mechanism to "respect improvements" of students' quality, which can be interpreted as the requirement that a mechanism doesn't punish a student in the matching stage for becoming better. I have a couple of papers studying similar issues, and in particular one of them uses a notion from Balinski and Sonmez in the context of schools' incentives to improve education quality. But I feel that not much has been done, especially compared to the importance of the subject. I wish I had more ideas on this topic so I could write new papers myself, but for now, it would be great if someone writes a paper on it...!
+7 votes
answered ago by (1.2k points)
I think there are lots of interesting questions around the design of marketplaces like Airbnb, TaskRabbit, and so on.  These marketplaces use their ranking algorithms as incentive schemes to induce suppliers to use instant booking, to accept bookings quickly, etc.  They face questions about how to select and prioritize suppliers for jobs, taking into account the fact that suppliers need to expect sufficient future benefits to be incentivized to provide good service, etc.  There are also lots of interesting questions about whether to let suppliers set prices on their own, whether to constrain or guide those prices, and how much to weight price in rankings.  Spend some time on the websites of these marketplaces, look at all the market design choices, and also try signing up as a supplier.  Think about the decisions, and ask what you might change.  Many of the marketplaces are willing to work with academics, so you may get a chance to write a paper with their data or give them advice, and learn what matters in practice in the process!
commented ago by (210 points)
There are many fascinating questions around the design of online marketplaces. From a practical perspective, it can at times be challenging to get access to data for empirical projects. For people thinking about how to get started... There are two flavors of data collection:

The first is to collaborate with a company, through data sharing and / or an experiment. Data sharing is obviously much easier, although collaborating on experiments is also feasible and can be very interesting. Many online marketplaces, including eBay, TaskRabbit, Upwork, BlaBlaCar, Wayfair, and Airbnb, have worked with doctoral students and postdocs in the past. After you have a research idea (Susan's suggestions above are very useful for this), it can be helpful to write a short proposal explaining what you want to do, what you - and they - will learn from it, and what data / experimental design features are critical for the project to be viable. This can help to figure out whether or not there is mutual interest, and to avoid late-in-the-game surprises. Companies also have different arrangements for how they share data. For example, the researcher can work on site for a period of time; this can make it easier to learn about the data and to interact with people within the company as questions and issues arise. It can also provide a sense of the challenges the company is thinking about. In other situations, companies just send a data set or a company laptop.

The second approach is to collect data on your own, without the company's involvement. This often starts with collecting public-facing data from the platform (many papers use scraped data). It can also include running your own experiments on the platform -- e.g., audit studies. This can be especially useful in situations where a company may be hesitant to share internal data -- e.g., topics that are policy relevant but sensitive for the company.

In practice, there are many good examples of both types of projects.
commented ago by (140 points)
These are great suggestions - to elaborate on Mike's ideas about pitching to the company:

1) Try to meet with them to get a sense of the big questions the company is wrestling with---and potentially acting on---before you pitch a research project. It's much easier to push a rock that's already rolling than to try to get something brand new started. As you learn what they care about, you might realize that you were initially asking the wrong question.  

2) Related to (1), you want to find a research project that is both academically interesting *and* something the company will care about getting an answer to. Fortunately, most market design stuff is a pretty easy sell on this dimension. But at times when I've been on the company side fielding proposals from academics, I've been amazed at how often the proposal doesn't speak at all to the company's interests.

3) In some cases, it's better to first get your foot in the door before agreeing to a specific project---learn the data, learn what experiments already exist, talk to people, find out what the company is open to talking about---and then pitch. All you need upfront is that they are broadly interested in an academic collaboration.  This lowers the initial risk and allows them to learn about you and, hopefully, to trust you. Talking about your rights as a researcher/hashing out a research agreement before they know you is a little like haggling over a pre-nup agreement on a first date.  That being said, eventually you need to sign a real research agreement that lays out your rights, but I think that can safely come later.
commented ago by (350 points)
@Mike: could you link to a paper that uses scraped data? That sounds interesting.
commented ago by (210 points)
To give an example of a paper using scraped data: there's a great paper by Dina Mayzlin, Yaniv Dover, and Judy Chevalier that uses scraped data from TripAdvisor and Expedia. They focus on a platform design choice that differs across two platforms with online reviews: per the paper, Expedia only allowed you to leave a review for a hotel if you booked through Expedia while TripAdvisor allowed you to leave a review without verifying your stay.

This design choice, in principle, makes it easier to leave fake reviews on TripAdvisor relative to Expedia, since you can leave a review without necessarily having booked a room. Consistent with this, the paper presents evidence that independent hotels (which they argue have higher-powered incentives to leave fake reviews) tend to have higher ratings on TripAdvisor, relative to Expedia. They also find evidence that hotels with a nearby independent competitor tend to have more negative reviews on TripAdvisor, relative to Expedia - suggesting that independent hotels may be leaving negative reviews on TripAdvisor for their competitors. Overall, the results suggest that TripAdvisor's design choice may be leading to more fake reviews. It likely also leads to more real reviews, which is one reason why they may keep this design despite its limitations.

Here's the paper: https://www.aeaweb.org/articles?id=10.1257/aer.104.8.2421

In addition to being an interesting use of scraped data, the paper highlights another good starting point for thinking about the design of online platforms: looking across the industry, different platforms often make different design choices. It can be interesting to think about why they reach different decisions.
+3 votes
answered ago by (310 points)
In a classical two-sided matching market where agents strictly rank potential partners, it is well known that impossibilities may arise: e.g., strategy-proof mechanisms cannot always select stable marriages (Roth, 1982), and non-manipulability is incompatible with individual rationality and efficiency (Sönmez, 1999). This has motivated research on restricted preference domains. One of the best-known examples is the dichotomous preference domain, which was popularized by Bogomolnaia and Moulin (2004) and Roth et al. (2004). This domain is also natural in some applications, like kidney exchange, where patients typically classify donors as either “acceptable” or “unacceptable”.

However, in some applications it may be natural to consider domains other than those mentioned above. A recent example of such a domain is the trichotomous one. This domain has a very natural interpretation in exchange systems where indivisible items are reallocated among agents. For example, workers in many different professions are engaged in shift work but would sometimes like to exchange their assigned time slots with other workers. It can be argued that such preferences are trichotomous in the sense that the agent prefers any acceptable time slot over any unacceptable time slot, but prefers any unacceptable time slot that she is endowed with over any unacceptable time slot in other agents’ endowments. This problem is considered in a recent paper by Manjunath and Westkamp (2018, https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxhY3dlc3RrfGd4OjE5NzY1MTE5OTA4MTQ0NTk). Another example is the introduction of immunosuppressive medications in kidney exchange, where patients naturally rank donors they can be compatible with only after undergoing an immunosuppressive protocol as strictly better than unacceptable donors but not as good as acceptable donors (Andersson and Kratz, 2016, https://ideas.repec.org/p/hhs/lunewp/2016_011.html).
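The trichotomous structure just described can be written as a simple three-level utility index (a minimal illustration of the definition, not code from the cited papers):

```python
def trichotomous_level(slot, acceptable, endowment):
    """Trichotomous preference index for one agent over a time slot.

    2: acceptable slot
    1: unacceptable, but part of the agent's own endowment
    0: unacceptable and owned by someone else
    """
    if slot in acceptable:
        return 2
    if slot in endowment:
        return 1
    return 0

def prefers(slot_a, slot_b, acceptable, endowment):
    """True if the agent strictly prefers slot_a to slot_b."""
    return (trichotomous_level(slot_a, acceptable, endowment)
            > trichotomous_level(slot_b, acceptable, endowment))
```

Comparisons between any two slots then reduce to comparing these levels, which is what makes mechanisms on this domain comparatively tractable to analyze.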

Problems defined on trichotomous domains are not yet well studied, and there is clearly room for more work in this area – especially since there are many real-life applications where this domain is quite natural.
+3 votes
answered ago by (280 points)
One of the topics that I don't think we've really understood is the connection between matching markets, auctions, and competitive equilibrium (with indivisibles) when there are complementarities or other complex constraints (e.g. minimum quota constraints, knapsack constraints etc).

What we do know is that the relationship between matching, tatonnement, auctions, and competitive equilibrium existence established by Kelso and Crawford (1982) breaks down with complementarities.

This really matters for market design. Tons of cool and important problems involve complements and weird constraints (e.g. spectrum, electricity markets, ride-sharing etc.) as Paul Milgrom showed in his recent book "Discovering Prices". Auction theorists usually deal with them by running package/combinatorial auctions, but we don't have a coherent theory of when they should work well. In his book, Milgrom develops a really interesting theory of "near-substitutes" and gives bounds for when running simple auctions would get us close to an efficient outcome.

Nguyen and Vohra (forthcoming) beautifully showed how you can deal with small complementarities (couples) in matching via capacity adjustment using Scarf's Lemma. Another lovely paper, by Baldwin and Klemperer (2018), showed that competitive equilibrium can exist when there are complementarities. But we still don't really know either the complete theory or how to tackle these problems from a market design perspective.

So this topic is an open field for pure theorists, applied theorists interested in particular designs, and even empiricists and practically minded economists who want to try things out in real marketplaces.
+4 votes
answered ago by (220 points)
I've long been interested in how the allocation problems in market design connect to other economic problems.  We've made a lot of progress in understanding how different ways of allocating resources might be more efficient or more fair, and there is still a lot to figure out about those things.  But what if an efficient allocation is not actually efficient when we think about the next stage?  e.g., the new FCC Incentive Auction is a major achievement for market design, but do we know that consumers will pay lower prices for wireless services because of the design?  e.g., does improving school allocation mechanisms to better reflect parental demand (and not strategic issues) actually lead to increased student achievement?

I think these are hard questions, but worthy ones.  Some progress in this direction will require integrating allocation problems into larger models of economic activity.  I've tried to do that in a paper with Chris Avery (linked here: https://economics.mit.edu/files/14472), where we consider school assignment and residential choice at the same time.  To keep the model tractable, we had to simplify the school choice side, but we find situations where choice improves the average quality of low-performing schools but does not necessarily benefit low types.

I also think some of these questions require a back-and-forth between theory and empirics.  For instance, there has been a lot of recent work trying to understand patterns of school demand, and an active debate on whether parents can be effective consumers in education markets.  One paper we've worked on looks at demand patterns in NYC and argues that parents demand school achievement levels (as opposed to value-added) --  https://economics.mit.edu/files/14573.   This may have perverse consequences for school incentives: they may invest in screening, as opposed to improving their productivity.   On the other hand, a very interesting paper by Diether Beuermann and Kirabo Jackson looks at girls in Barbados and finds that the schools they prefer reduce teen motherhood, increase educational attainment, increase earnings, and improve health.   I think there is a lot of room to explore these issues further.
+3 votes
answered ago by (6.9k points)
Andersson and Teytelboym both suggest interesting directions for future work, but neither of them mentioned their own exciting work on refugee resettlement, which is a big, malfunctioning matching market. (An op-ed I wrote in 2015 was headlined "Migrants aren’t widgets," the point being that migrants make choices, and have big strategy sets, so they can't successfully be kept where they don't want to be... https://www.politico.eu/article/migrants-arent-widgets-europe-eu-migrant-refugee-crisis/ ).

The situation for refugees and migrants hasn't notably improved since then (and has in many ways gotten worse), but there has been good work by Alex and Tommy and others thinking about how to match refugees who have already been granted asylum to towns and housing in their new country.

The problem of resettling migrants looks likely to be with us for a long time, and may get much bigger in the coming century, if sea levels rise substantially.  Even when particular crises subside, we should be getting ready for the coming ones.  Market designers can help us get ready...
commented ago by (310 points)
Thanks, Al, for drawing attention to the work on refugee matching by Alex and myself (and others). Earlier today, Alex and I posted a new working paper on the subject (joint with Andrew Trapp, Alessandro Martinello and Narges Ahani; see https://swopec.hhs.se/lunewp/abs/lunewp2018_023.htm). This paper describes Annie MOORE (Matching and Outcome Optimization for Refugee Empowerment), the first software in the world that helps resettlement agencies optimize their initial placement of refugees within host countries. What may be more interesting for readers of this thread is the conclusions section of the paper, where we discuss several directions for future work, such as incorporating multiple objectives from additional integration outcomes, dealing with equity concerns, evaluating potential new locations for resettlement, managing quotas in a dynamic fashion, and eliciting refugee preferences. I believe that all of these subjects are dissertation-worthy topics in market design.
commented ago by (6.9k points)
That’s a good name for your algorithm, although I had to look in your paper to discover who Annie M. was.
+1 vote
answered ago by (160 points)
edited ago by
The role of cutoff scores. Cutoff scores are used in many European countries in nationwide college admissions (e.g. Hungary, Ireland, Spain; see http://www.matching-in-practice.eu/). Students apply for university programmes and submit their preferences, the universities set quotas on their programmes, and the students are ranked by the universities based on their scores, which come from exams and interviews (and sometimes also from affirmative action, such as extra scores in Hungary for young mothers, disabled people, or economically disadvantaged students). The matching is typically computed by some Gale-Shapley type algorithm and the cutoff scores are publicly announced, which also implies the allocation (i.e. every student is admitted to the top choice on her list where she achieved the cutoff).

One question, the connection between cutoffs and stable matchings, has already been studied in the Azevedo-Leshno JPE paper ("A Supply and Demand Framework for Two-Sided Matching Markets") for large markets. Another interesting question is that, in the case of special features (such as ties, lower and common quotas, and paired applications in the Hungarian system), a stable solution may not exist. However, the cutoff scores do still create fair matchings (i.e. matchings with no justified envy, but potentially with some seats left empty). Fair matchings have very recently been studied by Wu and Roth (https://www.sciencedirect.com/science/article/pii/S0899825618300022) and also by Kamada and Kojima (https://www.dropbox.com/s/nof48u5lu1x1m3k/fair_matching127.pdf?dl=0). Fair matchings may exist even when a stable matching does not, and the wastefulness of the solution may not be such a big issue, as the students are not aware that some seats were left empty (and the universities' quotas on their programmes are quite arbitrary anyway). So I believe that the study of fair (envy-free) matchings, i.e. those obtained from cutoff scores, would be really interesting in two-sided settings with such complications, and also in real applications.
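The cutoff description lends itself to a compact computational sketch in the spirit of the Azevedo-Leshno supply-and-demand view (an illustration of my own, assuming strict integer scores and no ties): start all cutoffs at zero, send each student to her top choice whose cutoff she meets, and raise the cutoff of any over-demanded programme just enough to reject the excess demand. Since cutoffs only rise, the process terminates, and it mirrors the proposal-rejection dynamic of deferred acceptance:

```python
def cutoff_tatonnement(prefs, scores, quotas):
    """Compute cutoff scores by iteratively raising over-demanded cutoffs.

    prefs: student -> ordered list of programmes
    scores: student -> {programme: integer score}, assumed distinct per programme
    quotas: programme -> capacity
    Returns (cutoffs, final assignment of students to programmes).
    """
    cutoffs = {p: 0 for p in quotas}
    while True:
        # Each student demands her top choice whose cutoff she meets.
        demand = {p: [] for p in quotas}
        for student, lst in prefs.items():
            for p in lst:
                if scores[student][p] >= cutoffs[p]:
                    demand[p].append(student)
                    break
        raised = False
        for p, ds in demand.items():
            if len(ds) > quotas[p]:
                ds.sort(key=lambda s: scores[s][p], reverse=True)
                # Raise the cutoff just above the best rejected score,
                # so exactly the top `quotas[p]` current demanders remain.
                cutoffs[p] = scores[ds[quotas[p]]][p] + 1
                raised = True
        if not raised:
            return cutoffs, demand
```

With ties, lower or common quotas, or paired applications this simple loop no longer finds a stable outcome, which is exactly where the fair-matching questions above begin.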

My other question about cutoff scores concerns the strategyproofness of the mechanism. In the literature on two-sided matching markets, strategyproofness has always been considered a main requirement of the mechanism (see e.g. the New York school choice case, where the idea of improvement cycles was rejected due to its manipulability). The college-proposing deferred acceptance algorithm has also been criticised in many papers, yet it was used in Hungary until 2007, is still in use in Turkey (as far as I know), and just this year an iterative version of this mechanism was introduced in France for allocating 800,000 students to universities. Is it really a big issue that the colleges are proposing and thus the mechanism is manipulable in theory? As the Roth-Peranson paper first showed in a real application (the NRMP), and as was later proved theoretically in some large-market papers, the difference between the student- and college-optimal solutions is marginal, so the possibility of a successful manipulation is very slim. Moreover, this manipulation (i.e. leaving out the last application(s)) can be very harmful for the student, so in the Bayesian sense it is not reasonable to manipulate the college-proposing DA. I think that cutoff scores can give another intuition for this phenomenon: the students may believe that they cannot influence the cutoff scores by their individual applications (which is 99% true, actually), and when they believe this, it is indeed obvious to them that submitting their true preferences is a best response. Can cutoff scores make DA (both versions), or other solution concepts leading to fair matchings, obviously strategyproof? Maybe not in the precise sense that Li defined this concept (https://www.aeaweb.org/articles?id=10.1257/aer.20160425), but in a Bayesian sense?
+3 votes
answered ago by (310 points)
Someone interested in Machine Learning might think about the question: "How can Machine Learning (ML) help design better marketplaces?"

Answering this question can take many different forms. In some instances, ML might be used to improve some informational deficiencies of a marketplace. In other instances, an ML algorithm might be used to find a better market mechanism, or it might even be integrated into the mechanism itself. Many directions can be taken.

There has already been some prior work on this question, for example:
1. Susan Athey has written on the impact ML will have on Economics more generally, including on market design specifically: http://www.nber.org/chapters/c14009.pdf

2. Paul Milgrom and Steve Tadelis have a paper on the impact of AI and ML on market design: http://www.milgrom.net/sites/default/files//AI%20and%20Mkt%20Design%20Final_0.pdf

3. David Parkes and co-authors have written on how to use Deep Learning to design new (optimal) mechanisms in markets with money and without money.
a) https://econcs.seas.harvard.edu/publications/optimal-auctions-through-deep-learning
b) http://econcs.seas.harvard.edu/files/econcs/files/narasimhan_ijcai16.pdf

4. I have recently thought about how machine learning algorithms can be "integrated" into a market mechanism, to design better market mechanisms (with Gianluca Brero, Ben Lubin, and Sebastien Lahaie):
a) http://www.ifi.uzh.ch/ce/publications/CAs_via_ML_Elicitation_Brero_et_al_IJCAI_2018.pdf
b) http://www.ifi.uzh.ch/ce/publications/FastCAsViaBayes.pdf

All of these research agendas are brand new, and lots of theoretical, empirical, and experimental work remains to be done!
+2 votes
answered ago by (260 points)
1. The set of stable matchings revisited:
Early works on matching markets showed various algebraic properties of stable matchings, but otherwise gave the impression that there are "many" stable matchings, and therefore "many" stable mechanisms and it is interesting to study them (think about men-proposing vs women proposing, strategies and equilibria etc.). More recent works (notably Immorlica-Mahdian, Kojima-Pathak, Ashlagi-Kanoria-Leshno, and other works in that spirit) stressed that in "likely" markets, the set of stable matchings is"small". This is according to the economic intuition that in large markets nobody is that important as to be able to significantly affect thr market outcome (or "the price"). Lately, at least two papers indicated that this intuition is useless when there is some regularity in the market, as players may have some (local) leverage over others. The first paper is by Dur et al. on unintended consequences of designing affirmative action in Boston. While this is not how they describe their results, one can certainly think about the indifference between regular slots and affirmative action slots as a regularity that creates multiplicity of stable matchings. The second paper is by Hassidim, Shorrer and myself. We show that in matching markets with contracts, multiplicity is also likely, and the sense in which the core is large depends on the substitutabilty condition that the many side's preferences satisfy.
The natural direction is finding those kind of regularities and how they affect the possible market outcomes. How to measure the size of the core, and so forth.
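To make "the size of the core" concrete, here is a small Python sketch that brute-force enumerates the stable matchings of a toy one-to-one market. The preference lists are purely illustrative (my own made-up example, not drawn from any of the papers above); even this 3x3 market has two stable matchings:

```python
from itertools import permutations

# Toy 3x3 market with illustrative (hypothetical) preference lists,
# ranked best to worst.
worker_prefs = {
    "w1": ["f1", "f2", "f3"],
    "w2": ["f2", "f1", "f3"],
    "w3": ["f1", "f2", "f3"],
}
firm_prefs = {
    "f1": ["w2", "w1", "w3"],
    "f2": ["w1", "w2", "w3"],
    "f3": ["w1", "w2", "w3"],
}

def prefers(prefs, agent, a, b):
    """True if `agent` ranks `a` strictly above `b`."""
    return prefs[agent].index(a) < prefs[agent].index(b)

def is_stable(matching):
    """`matching` maps each worker to a firm; it is stable if no
    worker-firm pair would both rather be matched to each other."""
    worker_of = {f: w for w, f in matching.items()}
    for w, plist in worker_prefs.items():
        for f in plist:
            if f == matching[w]:
                break  # everything below this is worse for w
            if prefers(firm_prefs, f, w, worker_of[f]):
                return False  # (w, f) is a blocking pair
    return True

workers, firms = list(worker_prefs), list(firm_prefs)
stable = [m for m in (dict(zip(workers, p)) for p in permutations(firms))
          if is_stable(m)]
print(len(stable), "stable matchings")  # -> 2 stable matchings
```

Measuring how the number of stable matchings grows (or shrinks) as such regularities are added to randomly drawn preferences is one computational way to approach the "size of the core" question.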

2. Behavioral aspects of market design: already discussed by Dorothea on the behavioral/experimental thread. But I strongly support her view that this research is important.

3. Looking at market design features that are less algorithmic but have crucial impact on the success of markets:
Rather than explaining this one thoroughly, I'll give a few examples. There is work in progress that will provide a clearer justification for studying such features and for deciding what we should focus on. My examples:
What is a good mechanism for the secondary market? In particular, what is the correct way to manage waiting lists?
How to promote voluntary participation?
What is the right amount of information to display to participants on both sides? In particular, how and whether to present results of past matchings / cutoffs and so forth?
What is the correct way to switch from an existing mechanism to a new one?
What features of the GUI are likely to elicit more truthful responses?
How to bundle choices for participants?
Should signals be added on top of the system?
Should the mechanism be completely transparent to participants?
And so on. Some of these questions have already been addressed to some extent. Again, the common denominator is that all these questions are not necessarily algorithmic, and are often considered marginal, but in fact have crucial importance in any application of market design.
+2 votes
answered ago by (330 points)
It was enjoyable to read all these informative comments above. :-)

Much real-world market design research focuses on fixing or replacing existing markets when a problem arises, or at the request of policymakers or institutions. Perhaps one empirical research trend is to move from reactive approaches to more proactive ones. (I am not trying to change market design's normative approach, although that approach is not yet perfect; e.g., Hitzig (2018) points out a normative gap between Boston school choice and social justice, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3242882 ). I think practical market designers (not pure theorists) can make greater contributions by rebalancing the focus from reaction to prevention/invention, and from inward critical thinking to outward entrepreneurial thinking (e.g., https://people.stanford.edu/athey/sites/default/files/economists_in_tech.pdf ).

The new paper released two days ago by Alex, Tommy, and their team on Annie MOORE is an excellent example of this proactive style of research (https://project.nek.lu.se/publications/workpap/papers/wp16_11.pdf ). The software can be expected to have a significant long-term impact on living conditions for refugees. And Tommy has suggested dissertation-worthy topics in his comments above.

Al has pointed us in promising new directions in his comments above. His views reach far into the coming century, :-D, but he has not mentioned his own exciting work on the Global Kidney Exchange Program (https://paireddonation.org/ ). This program also exemplifies the proactive approach. We can expect it to involve more countries over the coming decades, but how to achieve an optimal outcome from a long-run global perspective is still under-explored. In practice, like Annie, this is a complex system that we need to solve. How do we price surgery for a particular patient, given her demographic, health, and economic conditions? How do we design the insurance scheme? Which hospital should she go to for the operation? If she has to be transferred to a hospital abroad, and if it is urgent, should there be passport-free travel? How long should she wait? From which country and from whom should she receive the kidney? How should we compensate the donor? What is the trade-off between life expectancy and quality of life (or compatibility of the kidney)? Etc. In such a high-dimensional design space, perhaps blockchain-based data combined with artificial intelligence could be a powerful aid.

Eric Posner and Glen Weyl's book Radical Markets (http://radicalmarkets.com/ ) gives another nice example of the proactive approach. For instance, it dedicates the whole of chapter 5 to the idea of "data as labor," redefining the economic relationship between ourselves and services like Facebook. (The other chapters are also eye-opening, offering innovative reform proposals on property, voting, immigration, and corporate governance. The authors envision a future far better than what we have today. They have also developed an app called "weDesign" for quadratic voting.) How to design such a data labor market in the real world is worth exploring. How do we motivate data workers to provide quality data, and help them navigate the complexities of digital systems without overburdening their time? (This might also relate to the trichotomous preferences mentioned by Tommy above.) It involves designing contracts as well as digital platforms, providing quality certification (probably using AI/ML methods), offering career development advice, balancing the bargaining power between monopsony data giants and data workers, and so forth.

Admittedly, there is always a gap between theory and practice, as many politicians and policymakers are trapped in a small-bore mentality, and many designs face challenges from social norms and social justice. However, for an open, free, and market-friendly order to prosper, small fixes will not do. It is time for visionary radicalism at the center.

To conclude, the main messages I want to convey are these: practical market design can take place at the local, national, and global levels, but it is beneficial to always maintain a global mindset. The world needs more entrepreneur-minded market designers, as well as more market-design-minded entrepreneurs.

p.s. I will start working as a researcher at a job-matching tech company next week, so it is really nice to see Bobby, John, Susan, and Mike's advice on online markets in this thread. It is very helpful to me :)
commented ago by (6.9k points)
Good luck in your new job! Job matching is important...
0 votes
answered ago by (140 points)
edited ago by
A Decentralized Alternative to the Order Book

Market design, indeed all design related to computers, is coupled tightly to the computer technology itself.  Just because one design is associated with a particular computer technology does not mean that the same design should be mindlessly carried over when the computer technology changes.

Until the 1970s, stock exchanges were characterized by a market design in which traders gathered on the trading floor while specialists manually matched bids and asks in paper order books.  Today that has been computerized using a centralized client-server architecture.

In the last several years, there has emerged a new decentralized, peer-to-peer (p2p) paradigm in computer architecture propelled by several trends — Internet of Things (IoT), autonomous vehicle-to-device (v2x) communication, and crypto.

This change demands a rethinking of the appropriateness of the centralized client-server order book market design.

We believe that publish-subscribe will be the leading protocol of the transaction layer of emerging crypto-economic trading platforms.

 It has already been deployed at scale as the platform behind several MMO games (from MZ) and chat platforms (WhatsApp from Facebook, WeChat from Tencent).

 What is needed is an innovative p2p market design.  One possibility is a many-to-many, high-frequency "take it or leave it" (TIOLI) publish-subscribe mechanism, which could also be described as a discrete-time, many-to-many, high-frequency version of the Myerson auction.

Is an efficient outcome possible?  If so, what bid-ask informational display is required?  Could an ephemeral (Snapchat) bid-ask design be efficient?
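As a purely illustrative starting point, the TIOLI publish-subscribe idea can be sketched in a few lines of Python. All names here (the class, its methods, the matching rule) are my own assumptions for exposition, not an existing protocol:

```python
from collections import defaultdict

class TioliBus:
    """Minimal sketch of a take-it-or-leave-it (TIOLI) publish-subscribe
    market. Illustrative only: sellers publish firm, non-negotiable ask
    quotes on a topic; subscribed buyers take the first quote at or
    below their reservation price."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> [(buyer, limit)]
        self.trades = []

    def subscribe(self, topic, buyer, limit):
        self.subscribers[topic].append((buyer, limit))

    def publish(self, topic, seller, ask):
        # Deliver the quote to subscribers in arrival order; the first
        # buyer whose limit covers the ask takes it ("take it"), after
        # which the quote is gone. Untaken quotes simply expire
        # ("leave it") -- an ephemeral, Snapchat-style display.
        for i, (buyer, limit) in enumerate(self.subscribers[topic]):
            if ask <= limit:
                self.trades.append((topic, seller, buyer, ask))
                del self.subscribers[topic][i]
                return True
        return False

bus = TioliBus()
bus.subscribe("XYZ", "buyer_a", limit=10.0)
bus.subscribe("XYZ", "buyer_b", limit=12.0)
bus.publish("XYZ", "seller_1", ask=11.0)  # taken by buyer_b
bus.publish("XYZ", "seller_2", ask=13.0)  # no taker; quote expires
```

Even this toy version surfaces the design questions above: because quotes are consumed in arrival order rather than cleared against a visible book, efficiency and the required bid-ask display become genuine research questions rather than solved problems.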

See my outline of the problem at

commented ago by (6.9k points)
Stock exchanges and order books certainly do look like good topics for potential redesign.  
Eric Budish, Peter Cramton and John Shim have some interesting thoughts on this in their 2015 QJE paper on frequent batch auctions as a response to very high frequency trading against an order book... https://faculty.chicagobooth.edu/eric.budish/research/HFT-FrequentBatchAuctions.pdf
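To make the batch-auction idea concrete, here is a minimal Python sketch of clearing one batch interval at a uniform price. This is my simplification for illustration, not the paper's exact mechanism; in particular, the clearing-price and rationing rules are placeholders:

```python
def clear_batch(bids, asks):
    """Uniform-price clearing of one batch interval, in the spirit of
    the frequent batch auctions of Budish, Cramton & Shim (2015).
    `bids` and `asks` are lists of (price, quantity). Simplified
    sketch: all matched orders trade at a single price, taken here as
    the midpoint of the marginal matched bid and ask; tie-breaking
    and rationing rules are ignored."""
    bids = sorted(bids, key=lambda o: -o[0])  # highest bid first
    asks = sorted(asks, key=lambda o: o[0])   # lowest ask first
    i = j = bid_left = ask_left = 0
    volume, price = 0, None
    while True:
        if bid_left == 0:               # advance to next bid order
            if i >= len(bids):
                break
            bid_price, bid_left = bids[i]
            i += 1
        if ask_left == 0:               # advance to next ask order
            if j >= len(asks):
                break
            ask_price, ask_left = asks[j]
            j += 1
        if bid_price < ask_price:       # no more profitable matches
            break
        q = min(bid_left, ask_left)
        volume += q
        bid_left -= q
        ask_left -= q
        price = (bid_price + ask_price) / 2
    return price, volume
```

The point of the design is that orders arriving anywhere within the same interval are treated identically, so sub-millisecond speed advantages stop mattering; the code makes it easy to see that only the sorted order of prices, not arrival times, determines the outcome.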
+1 vote
answered ago by (6.9k points)
New York City has recently announced that it will replace the second round of its school choice system, in use for the last decade and a half, with an incompletely described system of waiting lists. Here are some links and questions: https://marketdesigner.blogspot.com/2019/08/waitlists-in-nyc-school-choice-early.html
This is an opportunity for market design theorists to think about what is likely to happen, and to propose some designs that fill in the missing details in ways that might be productive.  In the fullness of time (e.g. by around this time next year) there may even be some data on what they did and how it went.