In February 2019, the Brookings Institution published The Opportunities and Risks of K-12 Student Placement Algorithms, which includes the following quote:

> Even if DA algorithms are relatively simple, predicting how student assignment policies will affect enrollment and outcomes is difficult.

In this post, I explore both parts of this claim: what makes the Deferred Acceptance (DA) algorithm simple, and why is it difficult to predict the effect of using it?

# Is Deferred Acceptance Simple?

Deferred Acceptance is simple in several desirable ways:

**Strategic Simplicity.** It is in applicants’ best interest to list their true preferences, regardless of what others do.

**Computational Simplicity.** The algorithm runs quickly, even on large data sets.

**Conceptual Simplicity.** After the fact, decisions can be rationalized by “admissions cutoffs” for each school. Posting these cutoffs allows students to verify that they have been assigned to their favorite school where their priority exceeded the cutoff.
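To make the cutoff idea concrete, here is a small sketch in Python. The data (two schools, two students, and their cutoffs) is made up purely for illustration; the point is only that, given posted cutoffs, a student can check their assignment without knowing anyone else’s preferences.

```python
# Sketch: verifying a DA outcome from posted cutoffs.
# A student's assignment should be their favorite school whose cutoff
# they clear. All data below is a hypothetical illustration.

# priority_rank[school][student]: 1 = highest priority at that school
priority_rank = {
    "A": {"s1": 2, "s2": 1},
    "B": {"s1": 1, "s2": 2},
}

# Posted cutoffs: the worst (largest) priority rank admitted at each school.
cutoffs = {"A": 1, "B": 1}

def verify(student, prefs):
    """Return the student's favorite school where they clear the cutoff."""
    for school in prefs:  # prefs listed from most to least preferred
        if priority_rank[school][student] <= cutoffs[school]:
            return school
    return None  # clears no cutoff: unassigned

print(verify("s1", ["A", "B"]))  # s1 misses A's cutoff, clears B's
print(verify("s2", ["B", "A"]))  # s2 misses B's cutoff, clears A's
```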

Despite these advantages, DA is often characterized as a complicated and unintuitive “black box.” This post offers one possible explanation for this: the outcome of DA is **unpredictable**. By this, I mean that small changes to the input can have a large effect on the output.

# An Example

Consider the example shown below. In a market consisting of only Schools A, B, C and students 1, 2, 3, each student gets their first choice – pretty good! Unsurprisingly, adding a fourth student can harm the first three students. Perhaps more surprisingly, in this example, adding student 4 causes students 1, 2 and 3 to each be assigned to their *last* choice. This occurs despite the fact that student 4 is not assigned to any school!^{1}

School priorities (highest priority first):

| School | Priority order |
|--------|----------------|
| A      | 2, 3, 4, 1     |
| B      | 3, 4, 1, 2     |
| C      | 1, 2, 3, 4     |

Student preferences (most preferred first):

| Student | Preference order |
|---------|------------------|
| 1       | A, B, C          |
| 2       | B, C, A          |
| 3       | C, A, B          |
| 4       | A, B, C          |

This example can be generalized: for any \(n\), there is an example with \(n\) students and \(n\) schools where each student gets his or her first choice, but adding a new student results in each original student getting their \(n^{th}\) choice, and the new student going unassigned.^{2}

# Defining Predictability

In the preceding example, a “small” change (adding one student) had a “large” effect on the outcome (moving all other students from their first to their last choice). We could formalize this observation by defining the *externality* of one student to be the sum of the improvement in ranks experienced by other students if this student is removed from the market. So if removing student x moves student y from their third choice to their first, moves student z from their fourth choice to their third, and leaves the assignment of all other students the same, the externality of student x is \((3-1) + (4-3) = 3\).
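This definition is easy to compute once the two matchings are in hand. The sketch below hard-codes the two DA outcomes stated in the example above (rather than recomputing them) and sums the rank improvements from removing student 4.

```python
# Externality of a student: total rank improvement for the others when
# that student is removed. Illustrated with the post's example; the two
# matchings are the DA outcomes stated there, hard-coded for brevity.

prefs = {1: ["A", "B", "C"], 2: ["B", "C", "A"], 3: ["C", "A", "B"]}
with_4    = {1: "C", 2: "A", 3: "B"}  # every student at their last choice
without_4 = {1: "A", 2: "B", 3: "C"}  # every student at their first choice

def externality(prefs, before, after):
    """Sum of 1-indexed rank improvements for students matched in both."""
    total = 0
    for stu, choice_list in prefs.items():
        rank_before = choice_list.index(before[stu]) + 1
        rank_after = choice_list.index(after[stu]) + 1
        total += rank_before - rank_after
    return total

print(externality(prefs, with_4, without_4))  # 6, i.e. n*(m-1) with n = m = 3
```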

The preceding example shows that in a market with \(n\) students and \(m\) schools, the externality of a student under DA can be as large as \(n \times (m-1)\). By contrast, the following result states that other commonly-studied mechanisms result in much smaller externalities.

**Proposition.** In a market with \(n\) students and \(m\) schools, when using Serial Dictatorship or the Boston Mechanism, the externality from a single student is at most \(m\).

Informally, under these other mechanisms, adding one student could either (i) slightly harm many other students or (ii) significantly harm a few other students, but not both.
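For contrast, here is a sketch of Serial Dictatorship on the same example. The picking order 1, 2, 3, 4 is my own assumption (the post fixes no order); under it, removing student 4 changes nothing at all, so student 4's externality is 0.

```python
# Serial Dictatorship: students pick in a fixed order, each taking their
# favorite remaining school. Sketch on the post's example; the picking
# order 1, 2, 3, 4 is an assumption for illustration.

def serial_dictatorship(order, prefs):
    taken, assignment = set(), {}
    for stu in order:
        for school in prefs[stu]:
            if school not in taken:
                assignment[stu] = school
                taken.add(school)
                break
    return assignment

prefs = {1: ["A", "B", "C"], 2: ["B", "C", "A"],
         3: ["C", "A", "B"], 4: ["A", "B", "C"]}

print(serial_dictatorship([1, 2, 3, 4], prefs))  # 1->A, 2->B, 3->C; 4 unmatched
print(serial_dictatorship([1, 2, 3], {s: prefs[s] for s in [1, 2, 3]}))
# Same matching for students 1-3: student 4's externality here is 0.
```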

# Why This Matters

Participants in matching markets often want to know what to expect. This is especially important during preference formation. Parents have to decide which schools to research, and medical students have to decide where to interview. Without good information about what options will be available, they may waste a lot of time and money exploring irrelevant alternatives. Historical data can offer guidance, but the examples in this post suggest that even relatively small changes from year to year (e.g. slightly fewer residency positions, or slightly fewer students applying) can render this information fairly useless.

The unpredictability of DA also matters to policymakers who must decide school priorities. It’s possible to construct examples similar to those above, showing that seemingly small changes to priorities can have large effects on the final outcome. Meanwhile, large changes to priorities can have almost no effect (for one real-world example, see this paper).

Even academics whose full-time job is to study stable matching algorithms struggle to predict what outcomes will arise when they are deployed. For most practical questions, we resort to simulations, with theory following decades later, if at all. I’ve been working on a new model of stable matching, which makes accurate predictions while maintaining analytical tractability. Stay tuned for more!

---

1. The fact that adding a student can harm others *even if the new student goes unassigned* may seem surprising: one might expect some sort of “irrelevance of unmatched students” property. In the academic literature this property is called “non-bossiness.” Although Deferred Acceptance is bossy (not non-bossy), other commonly-studied mechanisms such as Random Serial Dictatorship, Top Trading Cycles, and the Boston Mechanism are all non-bossy.

2. These examples rely on carefully constructed preferences. It is natural to wonder whether such large changes are “typical.” A partial answer is given in this paper by Itai Ashlagi, Yash Kanoria and Jacob Leshno, which shows that adding one student to a balanced one-to-one market has a dramatic effect even when preferences and priorities are drawn uniformly at random.