Deferred Acceptance is Unpredictable

Nick Arnosti

2020/08/18


In February 2019, the Brookings Institution published The Opportunities and Risks of K-12 Student Placement Algorithms, which includes the following quote:

Even if DA algorithms are relatively simple, predicting how student assignment policies will affect enrollment and outcomes is difficult.

In this post, I explore both parts of this claim: what makes the Deferred Acceptance (DA) algorithm simple, and why is it difficult to predict the effect of using it?

Is Deferred Acceptance Simple?

I can think of several desirable ways in which Deferred Acceptance is simple: it can be described to participants in a few sentences, it makes truthfully ranking schools a dominant strategy for students, and its outcome does not depend on the order in which proposals are processed.

Despite these advantages, DA is often characterized as a complicated and unintuitive “black box.” This post offers one possible explanation: the outcome of DA is unpredictable, in the sense that small changes to the input can have a large effect on the output.

An Example

Consider the example shown below. In a market consisting of only Schools A, B, C and students 1, 2, 3, each student gets their first choice – pretty good! Unsurprisingly, adding a fourth student can harm the first three students. Perhaps more surprisingly, in this example, adding student 4 causes students 1, 2 and 3 to each be assigned to their last choice. This occurs despite the fact that student 4 is not assigned to any school!¹

Table 1: An example where adding student 4 dramatically affects the final assignment.

School priorities        Student preferences
A: 2 3 4 1               1: A B C
B: 3 4 1 2               2: B C A
C: 1 2 3 4               3: C A B
                         4: A B C
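
To see the mechanics concretely, here is a minimal sketch of student-proposing Deferred Acceptance run on Table 1. The function and variable names (student_propose_da, prefs, priorities) are mine, not from any library, and each school is assumed to have a single seat.

```python
# Minimal sketch of student-proposing Deferred Acceptance (Gale-Shapley).
# All names here are illustrative; each school has exactly one seat.

def student_propose_da(prefs, priorities):
    """prefs: student -> list of schools, most preferred first.
    priorities: school -> list of students, highest priority first."""
    rank = {sch: {stu: i for i, stu in enumerate(order)}
            for sch, order in priorities.items()}
    next_choice = {stu: 0 for stu in prefs}  # index of next school to propose to
    held = {}                                # school -> tentatively held student
    free = list(prefs)
    while free:
        stu = free.pop()
        if next_choice[stu] >= len(prefs[stu]):
            continue                         # list exhausted: stays unassigned
        school = prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        incumbent = held.get(school)
        if incumbent is None:
            held[school] = stu
        elif rank[school][stu] < rank[school][incumbent]:
            held[school] = stu               # displace the incumbent
            free.append(incumbent)
        else:
            free.append(stu)                 # rejected; will try the next school
    return {stu: sch for sch, stu in held.items()}

priorities = {"A": [2, 3, 4, 1], "B": [3, 4, 1, 2], "C": [1, 2, 3, 4]}
prefs3 = {1: ["A", "B", "C"], 2: ["B", "C", "A"], 3: ["C", "A", "B"]}
prefs4 = {**prefs3, 4: ["A", "B", "C"]}

print(student_propose_da(prefs3, priorities))  # everyone's first choice
print(student_propose_da(prefs4, priorities))  # 1 -> C, 2 -> A, 3 -> B; 4 unassigned
```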

This example can be generalized: for any \(n\), there is a market with \(n\) students and \(n\) schools in which each student gets their first choice, but adding a new student results in each original student getting their \(n^{th}\) choice, and the new student going unassigned.²
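
The post does not spell out the general construction, but the following pattern (my reconstruction, which reproduces Table 1 when \(n = 3\)) works: student \(i\) ranks schools \(i, i+1, \ldots\) cyclically; school \(j\) ranks the original students cyclically starting from \(j+1\), with the new student inserted just after student \(n\); and the new student ranks schools \(1, \ldots, n\). The sketch below builds these preferences and checks the claim, reusing student_propose_da from above.

```python
# Sketch of one generalization of Table 1 (my reconstruction, not
# necessarily the construction the post has in mind).
def build_example(n):
    new = n + 1
    # Student i: schools i, i+1, ..., n, 1, ..., i-1; new student: 1, ..., n.
    prefs = {i: [(i - 1 + k) % n + 1 for k in range(n)] for i in range(1, n + 1)}
    prefs[new] = list(range(1, n + 1))
    # School j: original students cyclically from j+1, with the new
    # student inserted immediately after student n.
    priorities = {}
    for j in range(1, n + 1):
        order = [(j + k - 1) % n + 1 for k in range(1, n + 1)]
        order.insert(order.index(n) + 1, new)
        priorities[j] = order
    return prefs, priorities

n = 6
prefs_n, priorities_n = build_example(n)
originals = {i: prefs_n[i] for i in range(1, n + 1)}
before = student_propose_da(originals, priorities_n)
after = student_propose_da(prefs_n, priorities_n)
assert all(before[i] == prefs_n[i][0] for i in originals)   # everyone's 1st choice
assert all(after[i] == prefs_n[i][-1] for i in originals)   # everyone's n-th choice
assert n + 1 not in after                                   # new student unassigned
```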

Defining Predictability

In the preceding example, a “small” change (adding one student) had a “large” effect on the outcome (moving all other students from their first to their last choice). We could formalize this observation by defining the externality of one student to be the sum of the improvements in rank experienced by other students when this student is removed from the market. So if removing student \(x\) moves student \(y\) from their third choice to their first, moves student \(z\) from their fourth choice to their third, and leaves the assignment of all other students the same, the externality of student \(x\) is \((3-1) + (4-3) = 3\).
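
In code, the externality could be computed by running a mechanism with and without the student and comparing ranks. This is a sketch, reusing the illustrative names from the first code block and assuming every other student is matched in both runs:

```python
# Sketch: externality of `student` under `mechanism`, per the definition above.
# Assumes all other students are matched both with and without `student`.
def externality(student, prefs, priorities, mechanism):
    with_s = mechanism(prefs, priorities)
    others = {s: p for s, p in prefs.items() if s != student}
    without_s = mechanism(others, priorities)
    return sum(p.index(with_s[s]) - p.index(without_s[s]) for s, p in others.items())

# Table 1: removing student 4 moves students 1-3 from last choice to first.
print(externality(4, prefs4, priorities, student_propose_da))  # 6 = 3 x (3 - 1)
```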

The preceding example shows that in a market with \(n\) students and \(m\) schools, the externality of a student under DA can be as large as \(n \times (m-1)\). By contrast, the following result states that other commonly-studied mechanisms result in much smaller externalities.

Proposition. In a market with \(n\) students and \(m\) schools, when using Serial Dictatorship or the Boston Mechanism, the externality from a single student is at most \(m\).

\(\color{red}{\text{*Update, November 2022*}}\) Clay Thomas points out to me that this so-called “Proposition” is false for serial dictatorship. Here is a counter-example. There are \(n\) students and \(n\) schools. Students have “nearly common” rankings: each student prefers low-indexed schools, except that student \(i\) ranks school \(i\) first. In this case, everyone gets their first choice. If we add an additional student \(0\) who starts by taking school \(n/2+1\), this doesn’t affect the first \(n/2\) students, but increases the rank for each of the \(n/2\) remaining students by at least \(n/2\), for a total externality on the order of \(n^2\). This is an embarrassing mistake, but I guess it’s bound to happen sometimes, given that much of the point of writing a blog post (rather than a paper) is that I can share ideas more quickly, with less careful analysis.
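
Here is that counter-example in code. The serial dictatorship implementation and all names are mine; student 0 picks first and, for simplicity, lists only school \(n/2+1\).

```python
# Sketch of the serial-dictatorship counter-example (implementation mine).
def serial_dictatorship(order, prefs):
    taken, assignment = set(), {}
    for stu in order:
        for school in prefs[stu]:
            if school not in taken:
                assignment[stu] = school
                taken.add(school)
                break
    return assignment

n = 10  # any even n works; the externality grows quadratically in n
# Student i ranks school i first, then the rest in increasing order.
sd_prefs = {i: [i] + [j for j in range(1, n + 1) if j != i] for i in range(1, n + 1)}
sd_prefs[0] = [n // 2 + 1]          # student 0 simply takes school n/2 + 1

base = serial_dictatorship(range(1, n + 1), sd_prefs)           # all first choices
bumped = serial_dictatorship([0] + list(range(1, n + 1)), sd_prefs)

# Students n/2+1, ..., n-1 each drop by more than n/2; student n is unmatched.
drop = sum(sd_prefs[i].index(bumped[i]) - sd_prefs[i].index(base[i])
           for i in range(1, n + 1) if i in bumped)
print(drop)  # 30 for n = 10, already exceeding the claimed bound of m = n
```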

I believe that the claimed result is true for the Boston mechanism. Here is a proof sketch. To fix terminology, say that Boston proceeds over a number of rounds: in round \(k\), it tries to match as many unassigned students to their \(k^{th}\) choice as possible. Let \(S_k\) be the set of students who participate in round \(k\) (that is, they are not matched in an earlier round). Then the sum of ranks of an assignment is \(\sum_k |S_k|\). Suppose that we add a student to the market. The externality of this student is equal to the sum of ranks in the new assignment, minus the new student’s rank in this assignment, minus the sum of ranks in the original assignment. In particular, if \(S_k'\) is the set of students who participate in round \(k\) after the new student is added, the student’s externality is upper bounded by \(\sum_{k} |S_k'| - \sum_k |S_k| - 1\). I claim that for each \(k\), \(S_k \subseteq S_k'\) (the addition of a new student doesn’t help anyone else), and \(|S_k'| \le |S_k| + 1\) (it can cause at most one new displaced student in each round). Since there are at most \(m\) rounds, the upper bound of \(m\) follows. \(\color{red}{\text{*End Update*}}\)
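
For reference, here is the round structure from the sketch in code: a minimal version of the Boston (immediate acceptance) mechanism with unit capacities, using the same illustrative conventions as the earlier blocks.

```python
# Sketch of the Boston mechanism: in round k, each still-unassigned student
# applies to their (k+1)-th choice, and acceptances are immediate and final.
def boston(prefs, priorities):
    rank = {sch: {stu: i for i, stu in enumerate(order)}
            for sch, order in priorities.items()}
    assignment, taken = {}, set()
    for k in range(max(len(p) for p in prefs.values())):
        applicants = {}  # school -> students applying to it this round
        for stu, p in prefs.items():
            if stu not in assignment and k < len(p) and p[k] not in taken:
                applicants.setdefault(p[k], []).append(stu)
        for school, stus in applicants.items():
            winner = min(stus, key=lambda s: rank[school][s])
            assignment[winner] = school   # final: never displaced later
            taken.add(school)
    return assignment

print(boston(prefs4, priorities))  # on Table 1: 4 -> A, 2 -> B, 3 -> C; 1 unassigned
```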

Why Predictability Matters

Participants in matching markets often want to know what to expect. This is especially important during preference formation. Parents have to decide which schools to research, and medical students have to decide where to interview. Without good information about which options will be available, they may waste a lot of time and money exploring irrelevant alternatives. Historical data can offer guidance, but the examples in this post suggest that even relatively small changes from year to year (e.g. slightly fewer residency positions, or slightly fewer students applying) can render this information fairly useless.

The unpredictability of DA also matters to policymakers who must decide school priorities. It’s possible to construct examples similar to those above, showing that seemingly small changes to priorities can have large effects on the final outcome. Meanwhile, large changes to priorities can have almost no effect (for one real-world example, see this paper).

Even academics whose full-time job is to study stable matching algorithms struggle to predict what outcomes will arise when they are deployed. For most practical questions, we resort to simulations, with theory following decades later, if at all. I’ve been working on a new model of stable matching, which makes accurate predictions while maintaining analytical tractability. Stay tuned for more!


  1. The fact that adding a student can harm others even if the new student goes unassigned may seem surprising: one might expect some sort of “irrelevance of unmatched students” property. In the academic literature this property is called “non-bossiness.” Although Deferred Acceptance is bossy (not non-bossy), other commonly-studied mechanisms such as Random Serial Dictatorship, Top Trading Cycles, and the Boston Mechanism are all non-bossy.↩︎

  2. These examples rely on carefully constructed preferences. It is natural to wonder whether such large changes are “typical.” A partial answer is given in this paper by Itai Ashlagi, Yash Kanoria and Jacob Leshno, which shows that adding one student to a balanced one-to-one market has a dramatic effect even when preferences and priorities are drawn uniformly at random.↩︎