
Mechanisms that Solve the Assignment Problem with Ordinal Preferences

Maastricht University
School of Business & Economics

Place & date: Maastricht, 08-06-2015
Name, initials: Derek van den Elsen, DJA
ID number: I6052872
Study: Econometrics & Operations Research
Course code: 2014-600-EBS2044
Supervisor name: Andre Berger
Writing assignment: Bachelor Thesis
UM email address: d.vandenelsen@student.maastrichtuniversity.nl

1. Introduction

Optimal allocation of indivisible goods among individuals is one of the core issues of
economics. Normally this is addressed using well-established concepts like markets and
auctions, where individuals receive goods in exchange for transfers. However, in a variety of
real-life situations these transfers are not an option and other methods need to be explored.
This is commonly referred to as the one-sided matching market, house allocation or
assignment problem, and it is one of the fundamental combinatorial optimization problems in
the branch of optimization or operations research in mathematics. It encompasses various
situations like assigning dormitory rooms to students, time slots to users of a common
machine, organ allocation, military postings, school choice, assigning graduates to
training positions, families to government-owned housing and wedding table selections.

This paper will start by introducing the model used throughout the paper and the specific
assumptions and restrictions this model follows. Secondly, the paper will define the three
main desirable properties we would like our mechanism to have, efficiency, strategyproofness
and fairness, and establish via some impossibility results that there is a mutual tradeoff
between these three. It will also touch upon the notion of popularity among matchings.
Thirdly, the paper will go into some popular mechanisms in the literature, namely the Random
Serial Dictatorship (RSD), Probabilistic Serial (PS) and rank-efficient mechanisms,
illustrated by examples showcasing their degree of efficiency, strategyproofness and fairness.
A heuristic perspective will also be offered. After that it will briefly discuss hybrid
mechanisms and finish with a short conclusion summarizing what the paper has discussed.

2. Model

In this paper we consider the problem of allocating a finite set of n indivisible items to a finite
set of m agents. We can assume n is equal to m if we introduce dummy agents and dummy
items that have respectively no preferences for items and no worth for agents. Every agent has
to be assigned exactly one item and owns nothing initially, so we do not allow an outside
option or starting endowments. This could be represented as finding a maximum weight
perfect matching in a bipartite graph where the weights are unknown.

We do not allow for money transfers, which might seem controversial as some would say it
should be perfectly valid for students that are more well off to buy their way into their
preferred dormitories, but this is the reality in certain markets like organ trade. It is not the
subject of the paper to discuss the ethics behind this and we take it as a given.

We specifically focus on ordinal preferences as opposed to cardinal preferences, where each


agent has specific values specified for each outcome. We merely know which outcome is
preferred to which, but not by how much. There is a lower informational burden on the agent
and in some settings this is more simple and realistic. Each agent a has preferences over
the set of items. We restrict ourselves to the strict preference domain, so we do not allow
agents to be indifferent between items, although this can be substantially important for
welfare gains. (Featherstone, 2011) Agents also only care about the item allocated to them so
envy or competitiveness or similar behavioural concepts do not play a role. Based on these
preferences and the mechanism at hand, agents formulate a preference list that they will
report, henceforth called as their strategy. Preferences are also assumed to be transitive,
meaning that when a is preferred to b and b is preferred to c, a is preferred to c. If a strategy is
incomplete we allow ourselves to fill in the blanks as we please, making each strategy of size
n. We assume agents are risk-neutral.

A mechanism takes a profile of strategies as input and outputs an allocation, where a profile
of strategies is a set of individual strategies of all agents. A deterministic assignment is an
allocation where each agent receives exactly one object and each object is allocated to exactly
one agent. It is convenient to think of it as a 0-1 matrix with rows indexed by agents and
columns indexed by objects, with exactly one 1 in each row and each column. A random
assignment is a probability distribution over deterministic assignments, leading to a
stochastic matrix whose entry at row i and column j represents the probability that agent i
gets item j. The allocation for agent i is then contained in the i-th row of this matrix and
referred to as x_i.
3. Desirable Properties

Three cornerstones form the foundation of these mechanisms: efficiency, fairness and
strategyproofness. Efficiency says something about the quality of the solution the mechanism
gives us. It is self-evident that everyone getting their least preferred item would be a
poorer assignment than everyone getting their first choice, but how do we formalize this?
Secondly,

most would agree that it is absolutely crucial that agents are incentivized to reveal their
true preferences; otherwise any statement about the quality of the assignment is based on
falsehoods. This property is referred to as strategyproofness and means that the mechanism is
not vulnerable to manipulation. Its importance might however be overstated, because in a
number of situations complications such as incomplete information prevent agents from
choosing a strategy properly. Thirdly, we would like our mechanisms to be fair. Outside of
normative reasons, fairness is necessary in order to better secure compliance with the
mechanism; agents would otherwise have a greater incentive to forego a centralized mechanism
altogether and arrange matters privately if possible. An honest question would be: why not
demand all three? It turns out that these properties are mutually incompatible in some of
their stronger forms, and naturally we need a feasible mechanism, so fourthly there will be
some impossibility results illustrating the tradeoffs mechanism designers face. Lastly we
will briefly discuss the notion of popularity among matchings.

In the traditional sense, efficiency is about maximizing social welfare, typically by summing
each agent's utility. However, this is more complicated when dealing with ordinal instead of
cardinal preferences, since we do not have any actual values to build on. Luckily we can use
the equivalent notion of a Pareto improvement: the deterministic assignment can be changed
without any agent being worse off and with at least one agent strictly better off, i.e. a
subset of agents trade their allocated goods amongst themselves and end up better off. A
deterministic assignment x Pareto dominates another deterministic assignment x' if
x(a) ≿_a x'(a) for every agent a in the set of agents A, with at least one preference strict.
A deterministic assignment is said to be Pareto-efficient if no Pareto improvement exists. A
random assignment is said to be ex-post efficient if it can be represented as a probability
distribution over Pareto-efficient deterministic assignments.

There are however some stronger forms of efficiency. If we see our indivisible goods as
comprised of probability mass, one can in a similar way define a Pareto improvement by
trading probability shares, where the new assignment must weakly stochastically dominate the
previous assignment. We will refer to this as an ordinal improvement. A random assignment x
ordinally dominates another random assignment x' if, for every agent a in the set of agents A
and every item i in the set of items I,

Σ_{j ≿_a i} x_{aj} ≥ Σ_{j ≿_a i} x'_{aj},

with at least one inequality strict. As before, a random assignment is ordinally efficient if
no other random assignment ordinally dominates it.
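This stochastic-dominance comparison can be checked mechanically. The following Python sketch (function name and data layout are our own, for illustration only) compares two random assignments, given as per-agent probability dictionaries, against the agents' rankings:

```python
from fractions import Fraction

def ordinally_dominates(x, y, prefs):
    """x ordinally dominates y if, for every agent and every prefix of that
    agent's ranking, x places at least as much probability on the prefix,
    with at least one strict inequality."""
    strict = False
    for a, ranking in prefs.items():
        cum_x = cum_y = Fraction(0)
        for item in ranking:
            cum_x += x[a].get(item, Fraction(0))
            cum_y += y[a].get(item, Fraction(0))
            if cum_x < cum_y:
                return False          # y gives strictly more mass to a prefix
            if cum_x > cum_y:
                strict = True
    return strict

# Tiny illustration: two agents with opposite rankings; the deterministic
# assignment x dominates the uniform lottery y.
prefs = {1: ["a", "b"], 2: ["b", "a"]}
x = {1: {"a": 1, "b": 0}, 2: {"a": 0, "b": 1}}
y = {1: {"a": Fraction(1, 2), "b": Fraction(1, 2)},
     2: {"a": Fraction(1, 2), "b": Fraction(1, 2)}}
print(ordinally_dominates(x, y, prefs))  # True
```

Note that two assignments are often incomparable: each may give some agent a strictly better prefix, in which case the function returns False in both directions.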

An even stronger concept is rank efficiency, which captures the idea that two agents getting
their first choice and one agent their second is better than one agent getting their first
choice and two agents their second. This is different from the Pareto concepts before,
because the agents might be better off as a whole while individually some are not: rank
improvements will make some agents worse off in order to increase total welfare. To this end
we define the rank distribution of allocation x as

N_k(x) = Σ_{a∈A} Σ_{i∈I} x_{ai} · 1{rank_a(i) ≤ k},

where rank_a(i) is the position of item i in agent a's preference list. N_k(x) is the
expected number of agents who get their kth choice or better under assignment x. A random
assignment x rank dominates a random assignment x' if N_k(x) ≥ N_k(x') for k = 1, …, n, with
at least one inequality strict.

Rank efficiency implies ordinal efficiency, which implies ex-post efficiency; the converse
implications do not necessarily hold. A mechanism is said to be ex-post efficient (ordinally
efficient, rank efficient) if it only produces random assignments that are ex-post efficient
(ordinally efficient, rank efficient). (Featherstone, 2011)
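The rank distribution is easy to compute from the assignment matrix. A small Python sketch (names are illustrative, not from the literature):

```python
from fractions import Fraction

def rank_distribution(x, prefs):
    """N_k(x): expected number of agents receiving their k-th choice or
    better, returned as a cumulative list [N_1, ..., N_n]."""
    n = len(prefs)
    N = []
    total = Fraction(0)
    for k in range(n):
        # add the probability mass every agent puts on their (k+1)-th choice
        for a, ranking in prefs.items():
            total += x[a].get(ranking[k], Fraction(0))
        N.append(total)
    return N

# Deterministic example: agents 1, 2, 3 all rank a > b > c, and receive
# a, b, c respectively (their 1st, 2nd and 3rd choices).
prefs = {1: list("abc"), 2: list("abc"), 3: list("abc")}
x = {1: {"a": 1}, 2: {"b": 1}, 3: {"c": 1}}
print(rank_distribution(x, prefs))  # [1, 2, 3]
```

Comparing two assignments' lists entrywise then decides rank dominance.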

A mechanism is said to be strategyproof if, for all possible preference profiles, revealing
your true preferences is a dominant strategy for each agent, regardless of other agents'
strategies. A mechanism is said to be weakly strategyproof if, for all possible preference
profiles, no agent can obtain a strictly stochastically dominating assignment by misreporting
his preferences, regardless of other agents' strategies. Strategyproofness implies weak
strategyproofness: strategyproofness has to work for all utility functions underlying the
ordinal preferences, whereas weak strategyproofness only has to work for some. (Nesterov,
2014) A mechanism is said to be non-bossy if it is not possible to change your strategy and
consequently change some other agent's allocation without changing your own. This is
desirable because, while we do not allow for monetary transfers in our model, agents may feel
induced to resort to bribery when faced with a bossy mechanism, in order to incentivize
influential agents to strategize in a way that is beneficial to them.

Strategyproofness might not be as crucial as it appears. Consider an example where a very
poor mechanism would set an order of agents and have them in turn receive their least
preferred item that is still left. Clearly everyone would have a large incentive to invert
their preference lists completely, and if they all did so, this mechanism would still be
ex-post efficient. Efficiency results may be more shaded and unclear behind mechanisms that
are not strategyproof, but they can still be there: if a mechanism always produces
allocations which are efficient relative to people's true preferences, it should not concern
us that some individuals have not reported the truth.

Another point is that agents might not be sophisticated enough to strategize. Bounded
rationality and limited information may leave agents unsure about how to value something
exactly; think of strict preferences that border on indifference. Add in the fact that all
agents might feel this way and will have to estimate how others rank the items in order to
produce a sound strategy, keeping in mind that those others might strategize as well. The
details of the mechanism used can also deliberately be kept secret to some degree, which adds
even more uncertainty to this already tricky environment, and the potential gains of
misreporting might prove too risky. Agents might however learn over time to strategize
appropriately within this environment. Nevertheless, in a substantially transparent
environment these concerns evaporate, so ultimately it is up to the situation at hand to
determine how important strategyproofness is.

As for fairness, a random assignment is envy-free if each agent prefers their own allocation
to that of any other agent. A random assignment is weakly envy-free if no agent strictly
prefers someone else's allocation to their own. Envy-freeness has to work for all utility
functions underlying the ordinal preferences, whereas weak envy-freeness only has to work for
some. A random assignment satisfies equal treatment of equals if agents with identical
preferences receive identical allocations. Envy-freeness implies weak envy-freeness and equal
treatment of equals. A mechanism satisfies the equal division lower bound if any outcome it
induces ordinally dominates the allocation that gives each agent probability 1/n of each
item. The latter allocation tends to be used in situations where preferences are completely
unknown. (Katta & Sethuraman, 2005)

Unfortunately we cannot just take a mechanism that is envy-free, strategyproof and
rank-efficient, because such a mechanism does not exist. Nesterov contributes some
impossibility results: there is no mechanism that is strategyproof, envy-free and ex-post
efficient; it is also not possible to simultaneously have strategyproofness, weak
envy-freeness and ordinal efficiency; and the set of mechanisms that are strategyproof,
ordinally efficient and satisfy the equal division lower bound is empty. (Nesterov, 2014) In
addition, Featherstone shows that there is no mechanism that is rank-efficient and even
weakly envy-free or weakly strategyproof. (Featherstone, 2011) This shows that there is a
tradeoff between efficiency, fairness and strategyproofness, and policy makers will have to
decide what is most important to them and implement the mechanism that best suits their
needs.

An alternative way of looking at the allocation problem is to ask what majority voting would
lead to. Namely, we define φ(M, M') = |{a ∈ A : M(a) ≻_a M'(a)}|, the number of agents who
strictly prefer their item under matching M to their item under matching M'. We say a
matching M is more popular than another matching M' if φ(M, M') > φ(M', M), meaning more
agents prefer matching M to M'. A matching M is defined as popular if φ(M, M') ≥ φ(M', M)
for all matchings M', namely when it is more popular than or equally popular as any other
matching. A matching being popular makes it relatively stable, as only a minority of agents
would want to deviate from it. Unfortunately a popular matching need not exist, as
illustrated by the following example. Suppose there are agents 1 to 3 with identical
preferences over items a to c, so for each agent: a ≻ b ≻ c. We have matchings M1, M2 and M3,
where under M1 agent 1 obtains item a, agent 2 gets item b and agent 3 gets item c.

Agent 1: a ≻ b ≻ c
Agent 2: a ≻ b ≻ c
Agent 3: a ≻ b ≻ c

M1 = (1→a, 2→b, 3→c), M2 = (1→c, 2→a, 3→b), M3 = (1→b, 2→c, 3→a).

Now M2 is more popular than M1 because agent 2 and 3 outnumber agent 1 and are both
strictly better off under M2. M3 is more popular than M2 because agent 1 and 3 outnumber
agent 2 and are both strictly better off under M3. M1 is however more popular than M3
because agent 1 and 2 outnumber agent 3 and are both strictly better off under M1. Since the
more popular than relation cycles, we cannot find a matching that is popular. There exist
efficient algorithms to check whether a popular matching exists within a given instance and,
if so, to find one. (Abraham et al., 2007)
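The cycle can be verified directly from the definition of φ. A Python sketch (using a profile consistent with the comparisons described above: all three agents ranking a ≻ b ≻ c):

```python
def phi(M1, M2, prefs):
    """phi(M, M'): number of agents who strictly prefer their item under M
    to their item under M' (lower index in the ranking = more preferred)."""
    return sum(
        1 for a in prefs
        if prefs[a].index(M1[a]) < prefs[a].index(M2[a])
    )

# Three agents with identical preferences a > b > c.
prefs = {1: list("abc"), 2: list("abc"), 3: list("abc")}
M1 = {1: "a", 2: "b", 3: "c"}
M2 = {1: "c", 2: "a", 3: "b"}
M3 = {1: "b", 2: "c", 3: "a"}

# Each matching beats the previous one 2 votes to 1 -- a Condorcet-style cycle.
print(phi(M2, M1, prefs), phi(M3, M2, prefs), phi(M1, M3, prefs))  # 2 2 2
```

Since every candidate matching is beaten by another, no popular matching exists in this instance.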

In an attempt to still use the notion of popularity and have a solution exist for every given
instance, McCutchen developed the unpopularity margin and unpopularity factor. Let
Δ(M, M') = φ(M', M) − φ(M, M'), that is, the difference between the number of agents that
prefer M' to M and the number of agents that prefer M to M'. Then the unpopularity margin of
M is defined as max_{M'} Δ(M, M'). Let δ(M, M') = φ(M', M) / φ(M, M'), that is, the ratio of
the number of agents that prefer M' to M to the number of agents that prefer M to M'. Then
the unpopularity factor of M is defined as max_{M'} δ(M, M'). Calculating these measures has
unfortunately been shown to be NP-hard. (McCutchen, 2008) We now have solution concepts that
either do not always exist or are very hard to compute.

Luckily, Kavitha et al. introduce mixed matchings, to which the notion of popularity
straightforwardly extends. A mixed matching is a probability distribution over matchings, in
the same spirit as mixed strategies are a probability distribution over pure strategies. In
the previous example the popular mixed matching is the one where M1, M2 and M3 are each
chosen with probability 1/3. They also prove that popular mixed matchings always exist, can
be computed efficiently and have a price of anarchy and stability of 2. (Kavitha et al.,
2009)

4. Mechanisms
If we take strategyproofness as the main foundation for a solution, we need a mechanism in
which agents are incentivized to report their true preferences. One way to do this is to fix
an order of agents and let each in turn pick their most preferred item that has not already
been taken. Every agent can then safely report their true preferences. This is commonly
referred to as the serial dictator mechanism, as we have a series of dictators picking
exactly the item they want among those not picked by previous dictators. At the end of the
series, each agent will have chosen an item and is allocated exactly that. An example to
illustrate:

Agent 1: a ≻ b ≻ c ≻ d
Agent 2: a ≻ b ≻ c ≻ d
Agent 3: b ≻ a ≻ d ≻ c
Agent 4: b ≻ a ≻ d ≻ c

If they pick in the ordering (1, 2, 3, 4), agent 1 simply chooses his most preferred item, a,
as everything is still up for grabs. Agent 2 would like to pick a, but that has been taken
already, so he settles for b. Agent 3 has both of his most preferred items taken already, so
he will get d. Agent 4 is left with c, his least preferred option. Note that aside from being
completely strategyproof, this mechanism is also ex-post efficient: no earlier dictator would
want to swap his allocation with a later dictator, since he would already have picked that
item if he preferred it. Agents 2, 3 and 4 have some reason to complain about the fairness of
all this, however, since each would have received their most preferred option had they
happened to be first in the ordering. Clearly the ordering dictates the final allocation, so
to make it fairer we turn to the Random Serial Dictatorship (RSD) mechanism.
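The serial dictator procedure takes only a few lines. A Python sketch (function name and data layout are our own, using the running example's preference profile):

```python
from typing import Dict, List

def serial_dictator(prefs: Dict[int, List[str]], order: List[int]) -> Dict[int, str]:
    """Each agent in `order` takes their most preferred item still available."""
    available = {item for ranking in prefs.values() for item in ranking}
    allocation = {}
    for agent in order:
        pick = next(item for item in prefs[agent] if item in available)
        allocation[agent] = pick
        available.remove(pick)
    return allocation

# Agents 1 and 2 rank a first, agents 3 and 4 rank b first.
prefs = {1: list("abcd"), 2: list("abcd"), 3: list("badc"), 4: list("badc")}
print(serial_dictator(prefs, [1, 2, 3, 4]))  # {1: 'a', 2: 'b', 3: 'd', 4: 'c'}
```

Running it with a different ordering immediately shows how strongly the outcome depends on who picks first.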
Here we consider all n! possible orderings and pick one uniformly at random. Applied to our
example this leads to the assignment below: agent 1 has a 5/12 chance of getting item a, a
1/12 chance of getting item b, a 5/12 chance of getting item c and a 1/12 chance of getting
item d. Note that agent 2, who has identical preferences to agent 1, receives an identical
allocation, so this mechanism satisfies equal treatment of equals. This is because each
ordering is given exactly equal probability of being chosen. The appendix shows how the
figure below is obtained, which is quite involved and not that easy to compute; Aziz, Brandt
and Brill discuss the computational complexity and conclude it is #P-complete. (Aziz et al.,
2013) Unfortunately, empirics show that agents do not always play dominant strategies, and
this mechanism is no exception: only 70.9% of agents reported truthfully under incomplete
information in an experimental study of an RSD mechanism for on-campus housing (Chen &
Sönmez, 2002), although a remarkable 100% reported truthfully under complete information in a
follow-up experimental study on RSD (Chen & Sönmez, 2003). On the efficiency side, RSD always
returns an assignment with social welfare at least one third of the optimal social welfare.
(Adamczyk et al., 2014) RSD also asymptotically has an ordinal social welfare factor, which
measures the number of agents that are at least as happy as under some unknown arbitrary
benchmark allocation, of 1/2. (Bhalgat et al., 2011) Lastly, RSD has an approximation ratio
of Θ(1/√n). (Filos-Ratsikas et al., 2014)

Agent\Item     a      b      c      d
Agent 1       5/12   1/12   5/12   1/12
Agent 2       5/12   1/12   5/12   1/12
Agent 3       1/12   5/12   1/12   5/12
Agent 4       1/12   5/12   1/12   5/12
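The matrix can be computed directly by averaging the serial dictator outcome over all 24 orderings. A Python sketch (illustrative names; the profile is the running example with agents 1 and 2 ranking a ≻ b ≻ c ≻ d and agents 3 and 4 ranking b ≻ a ≻ d ≻ c, consistent with the picks described in the text):

```python
from fractions import Fraction
from itertools import permutations

def rsd_matrix(prefs):
    """Average the serial-dictator outcome over all n! orderings, using
    exact rational arithmetic so the 5/12 and 1/12 entries come out exactly."""
    agents = list(prefs)
    items = list(prefs[agents[0]])
    prob = {a: {i: Fraction(0) for i in items} for a in agents}
    orderings = list(permutations(agents))
    for order in orderings:
        available = set(items)
        for agent in order:
            pick = next(i for i in prefs[agent] if i in available)
            available.remove(pick)
            prob[agent][pick] += Fraction(1, len(orderings))
    return prob

prefs = {1: list("abcd"), 2: list("abcd"), 3: list("badc"), 4: list("badc")}
m = rsd_matrix(prefs)
print(m[1]["a"], m[1]["b"])  # 5/12 1/12
```

This brute-force enumeration is fine for n = 4 but grows as n!, in line with the #P-completeness result cited above.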

Bogomolnaia and Moulin proposed a more efficient solution, named the probabilistic serial
(PS) mechanism. (Bogomolnaia & Moulin, 2001) We consider each of our indivisible items as
composed of probability mass that agents consume at equal unit speed. Each agent consumes
whatever item they prefer most at that point in time, provided it has not been fully consumed
yet. After all items have been consumed and each agent has gathered exactly 1 in probability
mass at t = 1, the mechanism randomly allocates items with exactly the probabilities the
agent in question has consumed of each particular item. If agent 1 has consumed half of item
b's probability mass, he is allocated that item with probability 1/2. If we apply this to our
previous example, both agents 1 and 2 start consuming a and both agents 3 and 4 start
consuming b. Since two agents are eating each of these items, a and b are fully consumed at
t = 1/2. Since b is already consumed, 1 and 2 start consuming c instead, and by similar
reasoning 3 and 4 start consuming d. Again two agents are consuming each of c and d, so these
are also finished after 1/2 time units, leading us to t = 1 with all items consumed, giving
us the following allocation:

Agent\Item     a      b      c      d
Agent 1       1/2     0     1/2     0
Agent 2       1/2     0     1/2     0
Agent 3        0     1/2     0     1/2
Agent 4        0     1/2     0     1/2

Note that this allocation ordinally dominates the RSD allocation: in the RSD solution, agents
1 and 2 could exchange their probability shares of b and d with agents 3 and 4 for
probability shares of a and c respectively, making every agent better off regardless of their
exact preferences. The PS mechanism is in fact ordinally efficient, non-bossy (Hugh-Jones et
al., 2014) and envy-free; however, it is only weakly strategyproof, which matters because the
other qualities depend on the reported lists being the actual true preference lists. An
example showing that it is not strategyproof is given in the appendix. This is not a concern
in especially large markets where the number of copies of each item is sufficiently large, as
truth-telling is then an exact dominant strategy (Kojima & Manea, 2006). In fact, PS and RSD
are asymptotically equivalent as the number of copies of each item approaches infinity,
meaning that PS is strategyproof and RSD is no less efficient in this setting. (Che & Kojima,
2010) However, in smaller markets this can be devastating, as certain Nash equilibria can
completely drop envy-freeness and ordinal efficiency. (Ekici & Kesten, 2012) In PS's defense,
it is NP-hard to compute an expected-utility better response, let alone a sequence of better
responses culminating in a Nash equilibrium with these possibly worse attributes than under a
truth-telling equilibrium. (Aziz et al., 2015) An experimental study comparing the
strategyproofness of RSD and PS found interesting results indicating that behavioural
anomalies should also not be sold short. (Hugh-Jones et al., 2014) This shows once again that
there is no one-size-fits-all mechanism and the choice ultimately depends on the specific
market characteristics.
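The simultaneous-eating procedure behind PS can be simulated exactly with rational arithmetic. A Python sketch (names are our own; the profile is the running example):

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous eating: every agent eats their favourite remaining item
    at unit speed; advance time to the next moment an item runs out."""
    agents = list(prefs)
    supply = {i: Fraction(1) for i in prefs[agents[0]]}
    alloc = {a: {i: Fraction(0) for i in supply} for a in agents}
    t = Fraction(0)
    while t < 1:
        # each agent targets their most preferred item with supply left
        target = {a: next(i for i in prefs[a] if supply[i] > 0) for a in agents}
        eaters = {}
        for a, i in target.items():
            eaters.setdefault(i, []).append(a)
        # time until the first targeted item is exhausted (capped at t = 1)
        dt = min(supply[i] / len(eaters[i]) for i in eaters)
        dt = min(dt, 1 - t)
        for i, who in eaters.items():
            for a in who:
                alloc[a][i] += dt
            supply[i] -= dt * len(who)
        t += dt
    return alloc

prefs = {1: list("abcd"), 2: list("abcd"), 3: list("badc"), 4: list("badc")}
alloc = probabilistic_serial(prefs)
print(alloc[1]["a"], alloc[1]["c"])  # 1/2 1/2
```

With equal numbers of agents and items, total supply runs out exactly at t = 1, so the loop always terminates.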
Let us now take strategyproofness for granted and primarily consider efficiency. A heuristic
outlook suggests that if no agent is competing with you for a certain item at the
corresponding rank or a higher rank, then that item should be allocated to you. Consider the
following preference profile:

Agent 1: d ≻ b ≻ a ≻ c ≻ e
Agent 2: e ≻ a ≻ b ≻ c ≻ d
Agent 3: d ≻ b ≻ a ≻ e ≻ c
Agent 4: b ≻ c ≻ a ≻ d ≻ e
Agent 5: b ≻ d ≻ c ≻ a ≻ e

In the first column it is clear that agents 1 and 3 contest item d and agents 4 and 5 contest
item b; agent 2, however, prefers an entirely uncontested item, so we allocate e to him and
delete it from the table.
Agent 1: d ≻ b ≻ a ≻ c
Agent 3: d ≻ b ≻ a ≻ c
Agent 4: b ≻ c ≻ a ≻ d
Agent 5: b ≻ d ≻ c ≻ a

Since the first column is still entirely contested, we move on and find that agent 3's next
preference is an item he was contesting with agents 4 and 5 in round 1. Agent 4, however,
finds an uncontested element and is allocated c.
Agent 1: d ≻ b ≻ a
Agent 3: d ≻ b ≻ a
Agent 5: b ≻ d ≻ a

Now b is up for grabs for agent 5, since he is no longer contesting it with agent 4.


Agent 1: d ≻ a
Agent 3: d ≻ a

With identical preferences left, we allocate the remaining items randomly. We end up with a
reasonable allocation of three first choices, one second choice and one third choice. This
mechanism satisfies equal treatment of equals, but is unfortunately not strategyproof. It is
in an agent's best interest to go for any highly preferred uncontested item and report
truthfully; once an item is contested, however, he should keep his reported preferences
similar to his rivals' so that the agent he is contesting with gets stuck with a less
preferred item first. This is illustrated in the appendix.
A similar focus on quality is followed by Featherstone, who introduces rank-efficient
mechanisms. Consider the following example along with two allocations, X and Y, and the
solutions RSD and PS provide:
Agent 1: a ≻ b ≻ c
Agent 2: a ≻ c ≻ b
Agent 3: b ≻ c ≻ a

X: [allocation not recoverable]

Y: the deterministic assignment 1→a, 2→c, 3→b

RSD:

Agent\Item     a      b      c
Agent 1       1/2    1/6    1/3
Agent 2       1/2     0     1/2
Agent 3        0     5/6    1/6

PS:

Agent\Item     a      b      c
Agent 1       1/2    1/4    1/4
Agent 2       1/2     0     1/2
Agent 3        0     3/4    1/4

Giving us the following rank distributions:


         (1)     (2)     (3)
Y         2       3       3
RSD      1.83    2.67     3
PS       1.75    2.75     3

Y is clearly the superior allocation efficiency-wise, as it has the largest rank distribution
and is in this case uniquely rank-efficient. Rank-efficient allocations can be calculated by
assigning to each outcome some cardinal value that respects the original ordinal preferences
and maximizing total value; the resulting allocation will be rank-efficient. Unfortunately
this mechanism is not even weakly strategyproof or weakly envy-free, although the
non-strategyproofness might be negligible in a low-information environment. The appendix
shows an example where misreporting pays. Solving the linear program below gives a
rank-efficient distribution, where v(k) is a positive, strictly decreasing sequence of size n
giving the value the mechanism places on k-th choice allocations. The objective maximizes
total value; the first constraint says that each item should be matched with at most 1 in
probability mass, the second that each agent should be assigned at most 1 in probability
mass, and lastly probabilities have to be nonnegative. (Featherstone, 2011)
max Σ_{a∈A} Σ_{i∈I} v(rank_a(i)) · x_{ai}

s.t. Σ_{a∈A} x_{ai} ≤ 1 for all i ∈ I,
     Σ_{i∈I} x_{ai} ≤ 1 for all a ∈ A,
     x_{ai} ≥ 0 for all a ∈ A, i ∈ I.
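Since the objective is linear over the bistochastic polytope, the Birkhoff-von Neumann theorem guarantees that some optimal solution sits at a vertex, i.e. is a deterministic assignment. For small n the program can therefore be brute-forced over all perfect matchings, as in the following Python sketch (profile and names are illustrative):

```python
from itertools import permutations

def rank_efficient_assignment(prefs, v):
    """Maximize the total value v(rank) of the ranks served, enumerating
    all deterministic assignments (valid for small n by Birkhoff-von Neumann)."""
    agents = list(prefs)
    items = list(prefs[agents[0]])
    best, best_val = None, None
    for perm in permutations(items):
        val = sum(v[prefs[a].index(i)] for a, i in zip(agents, perm))
        if best_val is None or val > best_val:
            best, best_val = dict(zip(agents, perm)), val
    return best

# Illustrative 3-agent profile: agents 1 and 2 both rank a first.
prefs = {1: list("abc"), 2: ["a", "c", "b"], 3: ["b", "c", "a"]}
v = [3, 2, 1]  # any positive, strictly decreasing sequence works
print(rank_efficient_assignment(prefs, v))  # {1: 'a', 2: 'c', 3: 'b'}
```

For larger instances one would hand the LP above to a solver instead of enumerating the n! vertices.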


Each of these mechanisms has its advantages and disadvantages; however, when policy makers
are faced with choosing between them, they have so far had to think in somewhat extreme terms
about exactly which quality they find important. Fortunately, Mennle and Seuken have
formalized a framework for hybrid mechanisms: probability distributions over mechanisms. They
develop a hybrid between RSD and PS, and define URBI(r)-partial strategyproofness and
imperfect dominance, which are in-between measures of weak strategyproofness and
strategyproofness, and of Pareto dominance and ordinal dominance, respectively. In this way
they establish a parametric tradeoff between strategyproofness and efficiency while retaining
the fairness properties. A policy maker who feels more inclined towards efficiency can simply
put a higher probability on selecting PS, and in a similar manner a mechanism designer who
attaches more value to strategyproofness puts a higher probability on RSD. (Mennle & Seuken,
2014) Featherstone also mentions in his paper the possibility that the non-strategyproofness
of his rank-efficient mechanism can be offset by implementing RSD instead with some
probability. This would also counteract agents' tendency to learn over time to strategize
within a mechanism. (Featherstone, 2011)
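The resulting random assignment of such a hybrid is simply the convex combination of the component mechanisms' assignment matrices. A minimal Python sketch (names and the single illustrative row are our own, taken from the running RSD/PS example):

```python
from fractions import Fraction

def hybrid_assignment(x_rsd, x_ps, beta):
    """Run PS with probability beta and RSD otherwise: the induced random
    assignment is the entrywise convex combination of the two matrices."""
    return {
        a: {i: (1 - beta) * p + beta * x_ps[a][i] for i, p in row.items()}
        for a, row in x_rsd.items()
    }

# Agent 1's row from the running example under RSD and under PS.
rsd_row = {1: {"a": Fraction(5, 12), "b": Fraction(1, 12),
               "c": Fraction(5, 12), "d": Fraction(1, 12)}}
ps_row = {1: {"a": Fraction(1, 2), "b": Fraction(0),
              "c": Fraction(1, 2), "d": Fraction(0)}}
mix = hybrid_assignment(rsd_row, ps_row, Fraction(1, 2))
print(mix[1]["a"])  # 11/24
```

Sliding beta between 0 and 1 traces out the parametric tradeoff described above.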
5. Conclusion
This paper has given an extensive review of the assignment problem when considering ordinal
instead of cardinal preferences. It started with a thorough explanation of the model and
continued with an extensive look at the desirable properties we would like to acquire. It
covered efficiency, fairness and strategyproofness, discussed the mutual tradeoff between all
of these, and briefly mentioned the notion of popularity as an instrument as well. After that
it showcased the most common mechanisms found in the literature, namely the RSD, PS and
rank-efficient mechanisms. RSD is strategyproof, ex-post efficient and weakly envy-free; PS
is weakly strategyproof, ordinally efficient and envy-free; and the rank-efficient mechanism
is rank-efficient, but not weakly strategyproof or weakly envy-free. It also provided a
heuristic perspective and offered a look at hybrid mechanisms, which are likely key to
optimally regulating these tradeoffs. Future research could focus on putting an empirical
foundation behind these hybrid mechanisms in order to see whether the hybrid properties hold
up.


References

Abraham D. J., Irving R. W., Kavitha T. & Mehlhorn K. (2007) Popular matchings. Society for
Industrial and Applied Mathematics. Retrieved from:
https://people.mpi-inf.mpg.de/~mehlhorn/ftp/PopularMatchingsJournal.pdf

Adamczyk M., Sankowski P. & Zhang Q. (2014) Efficiency of truthful and symmetric mechanisms
in one-sided matching. In Lavi R. (Ed.), Algorithmic Game Theory (pp. 13-24). Springer Berlin
Heidelberg. Retrieved from: http://arxiv.org/pdf/1407.3957v2.pdf

Aziz H., Brandt F. & Brill M. (2013) The computational complexity of random serial
dictatorship. Economics Letters. Retrieved from: http://arxiv.org/pdf/1304.3169v2.pdf

Aziz H., Gaspers S., Mackenzie S., Mattei N., Narodytska N. & Walsh T. (2015) Manipulating
the probabilistic serial rule. Retrieved from: http://arxiv.org/pdf/1501.06626v1.pdf

Bhalgat A., Chakrabarty D. & Khanna S. (2011) Social welfare in one-sided matching markets
without money. In L. A. Goldberg, K. Jansen, R. Ravi & J. D. P. Rolim (Eds.), Approximation,
Randomization, and Combinatorial Optimization. Algorithms and Techniques (pp. 87-98).
Springer Berlin Heidelberg. Retrieved from: http://arxiv.org/pdf/1104.2964v1.pdf

Bogomolnaia A. & Moulin H. (2001) A new solution to the random assignment problem. Journal of
Economic Theory, 100, 295-328. DOI: 10.1006/jeth.2000.2710

Che Y.-K. & Kojima F. (2010) Asymptotic equivalence of probabilistic serial and random
priority mechanisms. Econometrica, 78:5, 1625-1672. Retrieved from:
http://www.columbia.edu/~yc2271/files/publications/Che-Kojima.pdf

Chen Y. & Sönmez T. (2002) Improving efficiency of on-campus housing: an experimental study.
American Economic Review. Retrieved from:
http://yanchen.people.si.umich.edu/published/Chen_Sonmez_AER_2002.pdf

Chen Y. & Sönmez T. (2003) An experimental study of house allocation mechanisms. Economics
Letters. Retrieved from:
http://yanchen.people.si.umich.edu/published/Chen_Sonmez_EL_2004.pdf

Ekici O. & Kesten O. (2012) An equilibrium analysis of the probabilistic serial mechanism.
International Journal of Game Theory. Retrieved from:
https://faculty.ozyegin.edu.tr/ekici/files/2014/08/Equilibrium.pdf

Featherstone C. R. (2011) A rank-based refinement of ordinal efficiency and a new (but
familiar) class of ordinal assignment mechanisms. Retrieved from:
http://assets.wharton.upenn.edu/~claytonf/JobMarketPaper3.pdf

Filos-Ratsikas A., Stiil Frederiksen S. K. & Zhang J. (2014) Social welfare in one-sided
matchings: random priority and beyond. In Lavi R. (Ed.), Algorithmic Game Theory (pp. 1-12).
Springer Berlin Heidelberg. Retrieved from: http://arxiv.org/pdf/1403.1508v2.pdf

Hugh-Jones D., Kurino M. & Vanberg C. (2014) An experimental study on the incentives of the
probabilistic serial mechanism. Discussion Papers, Research Unit: Market Behavior. Retrieved
from: http://www.uni-heidelberg.de/md/awi/studium/hugh-jones_et_al_2014.pdf

Katta A. & Sethuraman J. (2005) A solution to the random assignment problem on the full
preference domain. Journal of Economic Theory. Retrieved from:
http://www.columbia.edu/~js1353/pubs/ks-fair.pdf

Kavitha T., Mestre J. & Nasre M. (2009) Popular mixed matchings. In S. Albers, A.
Marchetti-Spaccamela, Y. Matias, S. Nikoletseas & W. Thomas (Eds.), Automata, Languages and
Programming (pp. 574-584). Springer Berlin Heidelberg. Retrieved from:
http://sydney.edu.au/engineering/it/~mestre/papers/mixed.pdf

Kojima F. & Manea M. (2006) Incentives in the probabilistic serial mechanism. Journal of
Economic Theory. Retrieved from: http://economics.mit.edu/files/5985

McCutchen R. M. (2008) The least-unpopularity-factor and least-unpopularity-margin criteria
for matching problems with one-sided preferences. In E. S. Laber, C. Bornstein, L. T.
Nogueira & L. Faria (Eds.), LATIN 2008: Theoretical Informatics (pp. 593-604). Springer
Berlin Heidelberg. Retrieved from: https://mattmccutchen.net/lumc/lumc.pdf

Mennle T. & Seuken S. (2014) Hybrid mechanisms: trading off efficiency and strategyproofness
in one-sided matching. Retrieved from: http://arxiv.org/pdf/1303.2558.pdf

Nesterov A. (2014) Fairness and efficiency in a random assignment: three impossibility
results. Retrieved from:
https://bdpems.wiwi.hu-berlin.de/portal/sites/default/files/WP%202014-06%20Fairness_and_Efficiency.pdf

Appendix

A. Computation of the RSD solution


Ordering   Agent 1   Agent 2   Agent 3   Agent 4
1234          a         b         d         c
1243          a         b         c         d
1324          a         c         b         d
1342          a         c         b         d
1423          a         c         d         b
1432          a         c         d         b
2134          b         a         d         c
2143          b         a         c         d
3124          a         c         b         d
3142          a         c         b         d
4123          a         c         d         b
4132          a         c         d         b
2314          c         a         b         d
2413          c         a         d         b
3214          c         a         b         d
3412          c         d         b         a
4213          c         a         d         b
4312          c         d         a         b
2341          c         a         b         d
2431          c         a         d         b
3241          c         a         b         d
3421          d         c         b         a
4231          c         a         d         b
4321          d         c         a         b

For example, the first entry of the RSD matrix corresponds to item a showing up for agent 1
in 10 out of the 24 permutations, giving probability 10/24 = 5/12.


B. PS is not strategyproof
Agent 1: a ≻ b ≻ c
Agent 2: a ≻ c ≻ b
Agent 3: b ≻ a ≻ c

Agent\Item     a      b      c
Agent 1       1/2    1/4    1/4
Agent 2       1/2     0     1/2
Agent 3        0     3/4    1/4

However, if agent 3 misreported his true preferences, b ≻ a ≻ c, as a ≻ b ≻ c, the PS
mechanism would instead produce:

Agent\Item     a      b      c
Agent 1       1/3    1/2    1/6
Agent 2       1/3     0     2/3
Agent 3       1/3    1/2    1/6

This would benefit him for certain utility functions, namely if

(3/4)·u3(b) + (1/4)·u3(c) < (1/3)·u3(a) + (1/2)·u3(b) + (1/6)·u3(c),

which holds for e.g. u3(a) = 9, u3(b) = 10 and u3(c) = 0.
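The comparison is easy to verify numerically. A small Python check of agent 3's expected utility under the two PS allocations stated above:

```python
from fractions import Fraction

# Agent 3's PS allocation under truth-telling vs misreporting, as above.
truthful = {"b": Fraction(3, 4), "c": Fraction(1, 4)}
misreport = {"a": Fraction(1, 3), "b": Fraction(1, 2), "c": Fraction(1, 6)}

# Utilities consistent with the true preferences b > a > c.
u = {"a": 9, "b": 10, "c": 0}

eu_truth = sum(p * u[i] for i, p in truthful.items())    # 3/4 * 10 = 15/2
eu_lie = sum(p * u[i] for i, p in misreport.items())     # 3 + 5 + 0 = 8
print(eu_truth < eu_lie)  # True
```

So with these utilities the misreport strictly raises agent 3's expected utility, confirming that PS is not strategyproof.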


C. Heuristic mechanism is not strategyproof


If agent 4 now reports item d as his second preferred choice and c as his fourth choice, the
profile becomes:

Agent 1: d ≻ b ≻ a ≻ c ≻ e
Agent 2: e ≻ a ≻ b ≻ c ≻ d
Agent 3: d ≻ b ≻ a ≻ e ≻ c
Agent 4: b ≻ d ≻ a ≻ c ≻ e
Agent 5: b ≻ d ≻ c ≻ a ≻ e

Agent 2 has the unique element in the first column so he obtains item e
Agent 1: d ≻ b ≻ a ≻ c
Agent 3: d ≻ b ≻ a ≻ c
Agent 4: b ≻ d ≻ a ≻ c
Agent 5: b ≻ d ≻ c ≻ a

Both the first and second column are now fully contested so we move on to the third column
where agent 5 now has to settle for item c.
Agent 1: d ≻ b ≻ a
Agent 3: d ≻ b ≻ a
Agent 4: b ≻ d ≻ a

Agent 4 has lost the contest for d, but obtains item b, which he prefers to item c, the item
he would obtain by reporting truthfully.

D. Rank-efficient mechanism is not strategyproof


Agent 1: b ≻ e ≻ c ≻ d
Agent 2: b ≻ c ≻ e
Agent 3: c ≻ d ≻ e
Agent 4: d ≻ b

The unique rank-efficient outcome gives everyone their top item except agent 1, who receives
e. However, if agent 1 were to report his preferences as:
Agent 1: b ≻ c ≻ d ≻ e
Agent 2: b ≻ c ≻ e
Agent 3: c ≻ d ≻ e
Agent 4: d ≻ b

Agent 1 would then get b, which he strictly prefers to e, the item he would get under
truth-telling.
