Workshop on Algorithmic Game Theory

DIMAP, University of Warwick, UK

March 25 - 28, 2007


ABSTRACTS, SLIDES, and ADDITIONAL MATERIALS


Yossi Azar (Tel Aviv University, Israel and Microsoft Research, Redmond, USA)

(Almost) Optimal Coordination Mechanisms for Unrelated Machine Scheduling

We investigate the influence of different algorithmic choices on the approximation ratio in selfish scheduling. Our goal is to design local policies that minimize the inefficiency of resulting equilibria. In particular, we design optimal coordination mechanisms for unrelated machine scheduling, and improve the known approximation ratio from m to log m where m is the number of machines.

Joint work with Kamal Jain and Vahab Mirrokni.

Slides


Richard Cole (New York University, USA)

Why Might Markets be Self-converging? A Local, Quickly Convergent Tatonnement Algorithm

To answer this question, we seek a price update procedure with the following properties:

  1. It uses only "local" information
    The price update for good i is a function only of the current and past values of its price and excess demand
  2. It is simple
    This is subjective, of course
  3. It is asynchronous
    That is, price updates do not have to be simultaneous
  4. It converges quickly
Specifically, we give two such tatonnement algorithms which operate in a subset of markets obeying the Weak Gross Substitutes property, including all markets with Cobb-Douglas utilities, and those with CES utilities having rho in the range (0,1), but bounded away from 1. The first algorithm requires and uses an (implicit) upper bound on the elasticity of demand; the second algorithm, loosely speaking, discovers this upper bound.
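To make property 1 concrete, here is a minimal, hypothetical sketch of a synchronous tatonnement loop for a Cobb-Douglas exchange economy. The multiplicative update rule, the clamping of excess demand, and all names and parameters below are generic illustrative choices, not the bounded-elasticity updates analyzed in the talk.

```python
# Minimal tatonnement sketch for a Cobb-Douglas exchange economy.  The update
# rule and step size are generic illustrative choices, not the rules from the talk.

def excess_demand(prices, endowments, exponents):
    """Excess demand z_i(p): consumer j spends a fraction exponents[j][i] of her
    budget p . w_j on good i (Cobb-Douglas demand); subtract total endowments."""
    n_goods = len(prices)
    z = [0.0] * n_goods
    for w, a in zip(endowments, exponents):
        budget = sum(p * wi for p, wi in zip(prices, w))
        for i in range(n_goods):
            z[i] += a[i] * budget / prices[i] - w[i]
    return z

def tatonnement(endowments, exponents, step=0.1, iters=2000):
    prices = [1.0] * len(endowments[0])
    for _ in range(iters):
        z = excess_demand(prices, endowments, exponents)
        # Local update: good i's price reacts only to its own excess demand.
        prices = [p * (1.0 + step * max(-1.0, min(1.0, zi))) for p, zi in zip(prices, z)]
    return prices

if __name__ == "__main__":
    endowments = [(1.0, 0.0), (0.0, 1.0)]   # two consumers, two goods
    exponents  = [(0.6, 0.4), (0.3, 0.7)]   # Cobb-Douglas exponents, each row sums to 1
    print(tatonnement(endowments, exponents))
```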

This is joint work with Lisa Fleischer (Dartmouth).

Slides


Vincent Conitzer (Duke University, USA)

Preprocessing Techniques for Computing Nash Equilibria

When computing a Nash equilibrium, it can be helpful to see if the game can be turned into a smaller game before unleashing an equilibrium-finding algorithm on it. For example, if a strategy is strictly dominated, it can be removed from the game. I will present two such preprocessing techniques. The first technique consists of a parameterized definition of when a strategy can be eliminated. At one extreme setting of the parameters, this definition corresponds to strict dominance; at the other extreme, it corresponds to whether the strategy is in the support of some Nash equilibrium. For parameter settings close to dominance, eliminability can be computed efficiently. The second technique looks for a particular structure in the game. Given this structure, a subcomponent of the game can be solved recursively, and its solution can be used to reduce the original game. If the structure exists in the game, then it can be found in polynomial time using an algorithm based on Horn satisfiability.
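As a point of reference for the first technique, the following is a minimal sketch of the most basic preprocessing step mentioned above: iterated removal of strictly dominated strategies. It only checks domination by other pure strategies (the general test against mixed strategies requires a linear program) and is not the parameterized eliminability test of the talk.

```python
import numpy as np

def iterated_pure_strict_dominance(A, B):
    """Iteratively remove rows/columns strictly dominated by another pure strategy.
    A and B are the row and column players' payoff matrices.
    Returns the surviving row and column indices."""
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        # Row r is dominated if some other row does strictly better against every surviving column.
        for r in rows[:]:
            if any(all(A[r2, c] > A[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(B[r, c2] > B[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

if __name__ == "__main__":
    # Prisoner's dilemma: cooperation is strictly dominated for both players.
    A = np.array([[3, 0], [5, 1]])
    B = np.array([[3, 5], [0, 1]])
    print(iterated_pure_strict_dominance(A, B))  # -> ([1], [1])
```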

This talk covers joint work with Tuomas Sandholm.

Slides

Relevant material: paper 1 and paper 2


Shahar Dobzinski (Hebrew University, Israel)

Truthful Mechanisms for Combinatorial Auctions with Subadditive Bidders

We study truthful mechanisms for combinatorial auctions when the bidders' valuations are subadditive. We first present a deterministic mechanism that guarantees an O(sqrt(m)) approximation. The algorithm finds the best allocation in a preselected range, and then simply uses VCG payments to ensure truthfulness. We will then sketch parts of the proof that this ratio is essentially the best possible using VCG payments.
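The "best allocation in a preselected range, plus VCG payments" idea (a maximal-in-range mechanism) can be sketched as follows. The tiny range and the valuations in the example are hypothetical; the range actually used to obtain the O(sqrt(m)) guarantee is more structured.

```python
def welfare(allocation, valuations):
    """allocation: one bundle (a frozenset of items) per bidder."""
    return sum(v(bundle) for v, bundle in zip(valuations, allocation))

def maximal_in_range_vcg(valuations, allocation_range):
    """Pick the welfare-maximizing allocation inside the preselected range and
    charge each bidder the VCG (Clarke) payment computed over that same range."""
    best = max(allocation_range, key=lambda a: welfare(a, valuations))
    payments = []
    for i in range(len(valuations)):
        def others(a):
            return sum(v(b) for j, (v, b) in enumerate(zip(valuations, a)) if j != i)
        without_i = max(others(a) for a in allocation_range)   # best the others could do
        payments.append(without_i - others(best))              # minus what they get now
    return best, payments

if __name__ == "__main__":
    items = frozenset({"a", "b"})
    # Hypothetical range: give all items to a single bidder (a "grand bundle" range).
    allocation_range = [(items, frozenset()), (frozenset(), items)]
    valuations = [lambda S: 3 * len(S), lambda S: 5.0 if S else 0.0]
    print(maximal_in_range_vcg(valuations, allocation_range))  # bidder 0 wins both items, pays 5
```

Charging VCG payments with respect to the same restricted range is exactly what makes such a mechanism truthful, regardless of how coarse the range is.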

In the second part of the talk we will show that randomization can greatly help us in this setting, by presenting a randomized mechanism that obtains an almost logarithmic approximation ratio.

Based on my joint works with Noam Nisan and Michael Schapira.

Slides


Kousha Etessami (University of Edinburgh, UK)

On the Complexity of Approximating Exact Fixed Points: Nash Equilibria, Stochastic Games, and Recursive Markov Chains

I will discuss some very recent results obtained with Mihalis Yannakakis on the topics of the title.

Slides


Uri Feige (Weizmann Institute, Israel and Microsoft Research, Redmond, USA)

On Allocations that Maximize Fairness

We consider a problem known as the restricted assignment version of the Santa Claus problem. There are n items (presents) of various values and m kids. Every kid is interested only in some of the items and would refuse to receive other items as presents. Santa Claus has to distribute the presents among the kids in a way that maximizes a certain notion of fairness, namely the minimum, over all kids, of the total value of the items a kid receives.
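The objective can be made concrete with a tiny brute-force sketch (exponential, for illustration only; the values and wish-lists below are hypothetical):

```python
from itertools import product

def max_min_allocation(values, allowed):
    """values[j]  : value of present j
       allowed[j] : set of kids willing to accept present j (restricted assignment)
       Returns the largest achievable minimum total value, by exhaustive search."""
    num_kids = max(max(kids) for kids in allowed) + 1
    best = 0
    # Try every way of giving each present to one of the kids that accepts it.
    for assignment in product(*[sorted(kids) for kids in allowed]):
        totals = [0] * num_kids
        for item, kid in enumerate(assignment):
            totals[kid] += values[item]
        best = max(best, min(totals))
    return best

if __name__ == "__main__":
    values  = [4, 3, 2, 2]
    allowed = [{0, 1}, {0}, {1}, {0, 1}]   # which kids accept each present
    print(max_min_allocation(values, allowed))   # -> 5
```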

Bansal and Sviridenko describe a linear programming relaxation for this problem and present a rounding technique that recovers an allocation whose value is at least an Ω(logloglog m / loglog m) fraction of the optimum. We show that the value of this LP relaxation in fact approximates the optimum value to within a constant factor. Our proof is not constructive and does not by itself provide an efficient algorithm for finding an allocation that is within a constant factor of optimal.

Slides


Amos Fiat (Tel Aviv University, Israel)

Efficient Contention Resolution Protocols for Selfish Agents

We seek to understand the behavior of selfish agents accessing a broadcast channel. In particular, we consider the natural agent utility in which costs are proportional to delay. Access to the channel is modeled as a game in extensive form with simultaneous play. Standard protocols such as Aloha are vulnerable to manipulation by selfish agents. We analyze the symmetric equilibrium strategy in such a setting and show that the appropriate transmission probabilities are O(1/sqrt(n)) when n agents are competing; this implies exponentially long delays. We then give a very simple protocol for the agents that is in Nash equilibrium and under which, except with exponentially small probability, all n agents successfully transmit within O(n) steps.
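The following illustrative simulation is not the protocol from the talk; it simply makes the setting concrete by letting every pending agent transmit independently with a fixed probability per slot, where a slot succeeds only if exactly one agent transmits.

```python
import random

def slots_until_all_transmit(n, prob, max_slots=10**6):
    """Every pending agent transmits independently with probability `prob` in each
    slot; a slot succeeds only if exactly one pending agent transmits."""
    pending = n
    for slot in range(1, max_slots + 1):
        if sum(random.random() < prob for _ in range(pending)) == 1:
            pending -= 1
            if pending == 0:
                return slot
    return None   # did not finish within max_slots

if __name__ == "__main__":
    n = 20
    for prob in (1.0 / n, n ** -0.5):   # an Aloha-like rate vs. a 1/sqrt(n) rate
        runs = [slots_until_all_transmit(n, prob) for _ in range(50)]
        runs = [r for r in runs if r is not None]
        print(round(prob, 3), sum(runs) / len(runs))
```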

Joint work with Yishay Mansour and Uri Nadav.

Slides


Jason D. Hartline (Microsoft Research, Mountain View, USA)

The Simple Mathematics of Optimal Auctions

Balcan et al. (2005) show that the random sampling auction framework of Goldberg et al. (2001) gives a generic reduction, in the context of unlimited-supply profit maximization, from optimal mechanism design (e.g., Goldberg et al. (2001)) to optimal algorithm design (e.g., Guruswami et al. (2005)). One major open question is whether this level of generality can be achieved in limited-supply settings. For the case where the seller has a large but limited supply of each commodity, we answer this question by giving a non-trivial adaptation of the random sampling auction framework to the limited-supply setting. We prove bounds analogous to those of Balcan et al. (2005) for both the objectives of profit maximization and welfare maximization, and these bounds apply even when agents have general (e.g., non-quasi-linear) preferences. These results generalize the prior work on random sampling limited-supply auctions of Borgs et al. (2005), which considers the special case where agents have linear valuations and budgets.
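For reference, here is a minimal sketch of the classic random sampling auction for the unlimited-supply (digital-good) setting on which the framework builds; the limited-supply adaptation described in the talk is more involved.

```python
import random

def optimal_single_price(bids):
    """Revenue-maximizing posted price for `bids`, restricted to prices among the bids."""
    if not bids:
        return float("inf")
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def random_sampling_auction(bids):
    """Split the bidders into two random halves; each half is offered the optimal
    single price computed from the *other* half, so the price a bidder faces never
    depends on her own bid (which is what makes the auction truthful)."""
    halves = [[], []]
    for b in bids:
        halves[random.randrange(2)].append(b)
    revenue, winners = 0.0, []
    for side in (0, 1):
        price = optimal_single_price(halves[1 - side])
        for b in halves[side]:
            if b >= price:
                winners.append((b, price))
                revenue += price
    return revenue, winners

if __name__ == "__main__":
    print(random_sampling_auction([1, 2, 3, 5, 8, 13, 21]))
```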

Slides


Kamal Jain (Microsoft Research, Redmond, USA)

On Stability of the Core

This talk introduces a strengthening of the notion of a stable core and characterizes it in terms of Kikuta and Shapley's extendability condition. One implication is that stability of the core may not admit a succinct characterization. For this reason we also investigate the decidability of verifying stability of the core.
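For reference, the stability notion above concerns the core of a transferable-utility game (N, v); the textbook definition (notation mine) is:

```latex
\mathrm{Core}(N, v) \;=\; \Bigl\{\, x \in \mathbb{R}^{N} \;:\;
  \textstyle\sum_{i \in N} x_i = v(N)
  \ \text{ and }\
  \textstyle\sum_{i \in S} x_i \ge v(S) \ \text{ for all } S \subseteq N \,\Bigr\}.
```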

Slides


Frank Kelly (Cambridge University, UK)

Congestion Control Algorithms

This talk will review various attempts to provide a theoretical underpinning for distributed congestion control algorithms.

Slides


Elias Koutsoupias (University of Athens, Greece)

Lower Bounds of Mechanisms for Scheduling Unrelated Machines

The mechanism design problem of scheduling tasks on n unrelated machines, in which the machines are the players of the mechanism, was proposed and studied in the seminal paper of Nisan and Ronen about ten years ago. It was shown there that the approximation ratio of mechanisms is between 2 and n. In this talk, I will present some recent results which improve the lower bound.

Slides


Piotr Krysta (University of Liverpool, UK)

Approximability of Pricing Problems

Suppose a seller wants to sell a finite collection of goods, each available in limited or unlimited supply. We have a set of potential customers, and each customer specifies a single subset of goods she is interested in and the maximum price she is willing to pay for that subset. If the goods are the edges of a graph and customers request to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links. We call such customers single-minded, as each is interested in a single subset of goods as a whole. The problem is to compute prices for the goods so as to maximize the overall revenue of the seller.
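A tiny sketch of the revenue objective in the single-minded, unlimited-supply case (the customer data below are hypothetical):

```python
def revenue_single_minded(item_prices, customers):
    """customers: list of (requested_bundle, budget) pairs.  A single-minded customer
    buys her whole bundle iff its total price fits her budget (unlimited supply)."""
    revenue = 0.0
    for bundle, budget in customers:
        total = sum(item_prices[item] for item in bundle)
        if total <= budget:
            revenue += total
    return revenue

if __name__ == "__main__":
    prices = {"e1": 3.0, "e2": 2.0}                     # e.g. prices of two edges
    customers = [({"e1"}, 4.0), ({"e1", "e2"}, 4.5), ({"e2"}, 2.5)]
    print(revenue_single_minded(prices, customers))     # 3.0 + 0 + 2.0 = 5.0
```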

In another setting we will consider, called unit-demand, each customer again declares a single subset of goods, together with a non-zero budget for each of these goods and a ranking of the goods she is interested in. Once prices are fixed, each customer buys one of the goods she can afford, based on some predefined selection rule such as min-buying, max-buying, or rank-buying. Again, the objective is to set the prices of the goods so as to maximize the revenue of the seller.

In this talk we will consider the approximability of such problems, and will discuss both approximation algorithms and non-approximability results for some variants of these problems.

This is joint work with Patrick Briest.

Slides


Stefano Leonardi (Università di Roma "La Sapienza", Roma, Italy)

Efficient Cost Sharing Mechanisms for Prize-collecting Steiner Forest

This talk will present new results on a game-theoretic version of the prize-collecting Steiner forest problem (PCSF). In PCSF we are given an undirected graph with non-negative edge costs, k terminal pairs {(s_i, t_i)} and penalties {p_i}. A feasible solution consists of a forest F and a subset Q of terminal pairs such that each pair (s_i, t_i) is either connected in the forest or belongs to Q. The objective is to compute a feasible solution that minimizes the cost of F plus the sum of the penalties of the pairs in Q.
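A small sketch of the objective: checking whether a candidate pair (F, Q) is feasible and, if so, evaluating its cost. The instance at the bottom is hypothetical.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def pcsf_cost(n, forest_edges, paid_pairs, pairs, penalties):
    """forest_edges: list of (u, v, cost); paid_pairs: indices of pairs placed in Q.
    Returns edge cost plus penalties if (F, Q) is feasible, otherwise None."""
    uf = UnionFind(n)
    for u, v, _ in forest_edges:
        uf.union(u, v)
    for i, (s, t) in enumerate(pairs):
        if i not in paid_pairs and uf.find(s) != uf.find(t):
            return None   # pair i is neither connected by F nor paid for via Q
    return sum(c for _, _, c in forest_edges) + sum(penalties[i] for i in paid_pairs)

if __name__ == "__main__":
    pairs, penalties = [(0, 1), (2, 3)], [5.0, 1.0]
    forest = [(0, 4, 2.0), (4, 1, 2.0)]                  # connects the pair (0, 1) via vertex 4
    print(pcsf_cost(5, forest, {1}, pairs, penalties))   # 4.0 + 1.0 = 5.0
```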

We present a simple 3-budget-balanced and group-strategyproof mechanism for the above problem. This also provides the first natural primal-dual 3-approximation algorithm for PCSF. We also show that our mechanism computes client sets whose social cost is at most O(log^2 k) times the minimum social cost of any player set. This matches a lower bound recently given by Roughgarden and Sundararajan (STOC'06).

Joint work with Anupam Gupta, Jochen Koenemann, R. Ravi and Guido Schaefer.

Slides


Burkhard Monien (University of Paderborn, Germany)

The Power of Two Prices: Beyond Cross-Monotonicity

A cost-sharing mechanism is budget-balanced (BB) if it guarantees that the service provider recovers the cost incurred by servicing the selected set of agents; it is no-free-riders (NFR) if no agent gets the service for free; it is group-strategyproof (GSP) if no coalition of agents can improve the utility of one of its members. When can a cost-sharing mechanism be simultaneously BB, NFR and GSP? A variant of this general question was already investigated by Moulin (1999), who showed that for submodular cost functions, cross-monotonicity guarantees BB and GSP. In this work, we consider arbitrary (not necessarily submodular) symmetric cost functions, which depend only on the number of serviced agents. Furthermore, we add NFR to the list of requirements. We obtain the following results:

Slides


Seffi Naor (Microsoft Research and Technion, Haifa, Israel)

Best Response Dynamics in Multicast Cost Sharing

A multicast game with selfish, non-cooperative players is considered. There is a special source node, and each player is interested in connecting to the source by making a routing decision that minimizes its payment. The mutual influence of the players is determined by a cost-sharing mechanism, which we assume splits the cost of an edge evenly among the players using it. We focus on the price of anarchy of Nash equilibria resulting from best-response dynamics in which the players join the game sequentially and play until an equilibrium is reached. For a game with n players, a polylogarithmic upper bound on the price of anarchy and a logarithmic lower bound are established. Extensions to a fractional model are also discussed.
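Below is a small best-response loop for a fair cost-sharing game of this kind, in which each player picks one of a few candidate routes to the source and pays an equal share of every edge she uses. The graph, routes and arrival order are hypothetical, and the code only illustrates the dynamics, not the polylogarithmic bound.

```python
def best_response_dynamics(candidate_paths, edge_cost):
    """candidate_paths[p]: routes (sets of edges) available to player p.  Players
    join one at a time, then repeatedly best-respond until no one wants to move;
    the cost of every edge is split evenly among the players currently using it."""
    choices = {}

    def my_cost(p, path):
        cost = 0.0
        for e in path:
            others = sum(1 for q, c in choices.items() if q != p and e in c)
            cost += edge_cost[e] / (others + 1)   # player p joins the edge too
        return cost

    for p, paths in enumerate(candidate_paths):           # sequential arrival
        choices[p] = min(paths, key=lambda path: my_cost(p, path))
    moved = True
    while moved:                                          # best-respond until stable
        moved = False
        for p, paths in enumerate(candidate_paths):
            best = min(paths, key=lambda path: my_cost(p, path))
            if my_cost(p, best) < my_cost(p, choices[p]) - 1e-12:
                choices[p], moved = best, True
    return choices

if __name__ == "__main__":
    edge_cost = {"shared": 3.0, "own0": 2.0, "own1": 2.0}
    # Each player may use a shared edge to the source or a private edge of its own.
    candidate_paths = [[frozenset({"shared"}), frozenset({"own0"})],
                       [frozenset({"shared"}), frozenset({"own1"})]]
    print(best_response_dynamics(candidate_paths, edge_cost))
```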

Slides


Noam Nisan (Hebrew University, Israel)

Approximation Mechanisms and Characterization of Implementable Social Choice Rules

The emerging field of Algorithmic Mechanism Design studies strategic implementations of social-choice functions that arise in computational settings - most importantly, various resource allocation rules. The clash between computational constraints and incentive constraints is at the heart of this field. This happens whenever one wishes to implement a computationally-hard social choice function (allocation rule). In such cases, approximations or heuristics are computationally required, but it is not at all clear whether these can be strategically implemented.

This talk will demonstrate many of the issues involved by looking in depth at a representative problem: multi-unit auctions.

The talk will have the flavor of a survey and is based on my previous joint work with Amir Ronen, Ilya Segal, Ahuva Mu'alem, Ron Lavi, and Shahar Dobzinski.

Slides


Christos H. Papadimitriou (University of California at Berkeley, USA)

Equilibria and Complexity: What now?

The proof that finding a Nash equilibrium is PPAD-complete, even for two players, poses some interesting questions: What does this result imply about the usefulness of the concept? Which alternative equilibrium concepts should be scrutinized now? And what about approximate Nash equilibria and efficient algorithms for special cases? In this talk we shall discuss these questions, focusing on recent results.

Slides


David M. Pennock (Yahoo! Research, New York, USA)

Computational Aspects of Prediction Markets

A prediction market is a financial market designed to aggregate knowledge and opinions about the likelihood of future events. I will discuss some of the computational aspects of these markets. After a brief introduction to prediction markets, I will cover:

  1. the computational complexity of matching orders expressed as Boolean functions of binary events;
  2. the complexity of matching orders expressed as properties of permutations of objects; and
  3. the design of automated market makers for prediction markets, including market scoring rules and dynamic parimutuel markets (a sketch of one such market maker appears below).
I will conclude with a discussion of open problems and directions for future research.
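As one concrete example of the automated market makers mentioned in item 3, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR); the liquidity parameter and the two-outcome market below are hypothetical.

```python
import math

def lmsr_cost(q, b):
    """Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b); they sum to 1 and
    can be read as the market's current probability estimates."""
    weights = [math.exp(qi / b) for qi in q]
    total = sum(weights)
    return [w / total for w in weights]

def cost_of_trade(q, outcome, shares, b):
    """Amount a trader pays to buy `shares` shares of `outcome`."""
    after = list(q)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(q, b)

if __name__ == "__main__":
    b, q = 10.0, [0.0, 0.0]             # two-outcome market, uniform prior
    print(lmsr_prices(q, b))            # [0.5, 0.5]
    print(cost_of_trade(q, 0, 5.0, b))  # cost of buying 5 shares of outcome 0
```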

Slides


Tim Roughgarden (Stanford University, USA)

Quantifying Inefficiency in Mechanism and Protocol Design

We use the metric of worst-case approximate economic efficiency to inform the design of cost-sharing mechanisms and protocols. We consider both private-value settings (truthful cost-sharing mechanisms) and full-information settings (distributed network cost-sharing protocols).

Includes joint work with Shuchi Chawla, Ho-Lin Chen, Aranyak Mehta, Mukund Sundararajan, and Gregory Valiant.

Slides


Amir Ronen (Technion, Haifa, Israel)

The Local and Global Price of Anarchy of Graphical Games

This work initiates a study of connections between local and global properties of graphical games. Specifically, we introduce a concept of local price of anarchy that quantifies how well subsets of agents respond to their environments. We then show several methods of bounding the global price of anarchy of a game in terms of the local price of anarchy. All our bounds are essentially tight.
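For reference, the (global) price of anarchy of a cost-minimization game is the standard worst-case ratio below; the local variant introduced in the talk restricts attention to subsets of agents and their environments.

```latex
% Price of anarchy of a cost-minimization game with equilibrium set Nash:
\mathrm{PoA} \;=\; \frac{\max_{\sigma \in \mathrm{Nash}} \mathrm{cost}(\sigma)}
                        {\min_{\sigma} \mathrm{cost}(\sigma)} .
```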

Joint work with Oren Ben-Zwi.

Slides


Amin Saberi (Stanford University, USA)

Approximation Algorithms for Max-Min Fair Allocation of Indivisible Goods

In this talk, I will present an approximation algorithm for max-min fair allocation of indivisible goods, where the goal is to allocate a set of goods to k people with linear utilities so that the least happy person is as happy as possible. The approximation ratio of our algorithm is O(sqrt(k) log^3 k). As a part of our algorithm, we design an iterative method for rounding a fractional matching on a tree, which might be of independent interest.

Joint work with Arash Asadpour.

Slides


Paul Spirakis (University of Patras, Greece, and CTI Patras, Greece)

Well-Supported Approximate Nash Equilibria

We discuss recent progress on a strong notion of approximate Nash equilibria (NE), namely well-supported approximate equilibria, focusing on bimatrix games. In such strategy profiles, each player obtains almost-maximum profit on every pure strategy in her support. Such profiles also imply the standard notion of approximation with the same quality, but the converse is not at all clear. Until very recently, no efficient method was known for computing such a strong approximation with quality better than the trivial one. We present very recent results consisting of several polynomial-time algorithms that compute well-supported NE with approximation quality much better than the trivial one. In particular, for games with zero-one entries we give an algorithm that loses at most 50 percent of the best-response profit on each pure strategy in the support. We extend this to games with entries in the interval [0,1], where the approximation constant becomes about 0.622. For sparse 0/1 games we achieve "almost Nash" approximations (i.e., the approximation loss goes to zero as the number of pure strategies increases). We also show that games with i.i.d. random entries admit such an almost-Nash approximation that is trivial to construct.
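The well-supported condition itself is easy to state and check. Below is a minimal numpy checker for a bimatrix game (A, B): every pure strategy a player uses with positive probability must earn within eps of that player's best pure response. The game and profile in the example are hypothetical.

```python
import numpy as np

def is_well_supported(A, B, x, y, eps, tol=1e-9):
    """(x, y) is an eps-well-supported equilibrium of the bimatrix game (A, B) if
    every pure strategy in a player's support earns within eps of that player's
    best pure response to the opponent's mixed strategy."""
    row_payoffs = A @ y          # payoff of each pure row strategy against y
    col_payoffs = x @ B          # payoff of each pure column strategy against x
    rows_ok = all(row_payoffs[i] >= row_payoffs.max() - eps - tol
                  for i in range(len(x)) if x[i] > tol)
    cols_ok = all(col_payoffs[j] >= col_payoffs.max() - eps - tol
                  for j in range(len(y)) if y[j] > tol)
    return rows_ok and cols_ok

if __name__ == "__main__":
    # Matching pennies: the uniform profile is an exact, hence 0-well-supported, equilibrium.
    A = np.array([[1.0, 0.0], [0.0, 1.0]])
    B = 1.0 - A
    x = y = np.array([0.5, 0.5])
    print(is_well_supported(A, B, x, y, eps=0.0))   # True
```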

Slides


Bernhard von Stengel (London School of Economics, United Kingdom)

Finding all Nash Equilibria of a Bimatrix Game

A bimatrix game is a basic model in non-cooperative game theory. Its central solution concept is the Nash equilibrium. For bimatrix games, Nash equilibria are best understood geometrically. In this talk we consider the problem of finding all Nash equilibria of a bimatrix game. We describe state-of-the-art algorithms, which are based on linear programming and polyhedral computation, and report some results of computational experiments.
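For small games, a textbook alternative to the polyhedral methods of the talk is support enumeration, which for nondegenerate games finds all equilibria by solving the indifference conditions for every pair of equal-size supports. The sketch below is exponential and purely illustrative; the example payoffs are hypothetical.

```python
import numpy as np
from itertools import combinations

def solve_indifference(M):
    """Probability vector p >= 0 with M @ p constant across rows and sum(p) = 1, or None."""
    k = M.shape[0]
    rows = [M[i] - M[0] for i in range(1, k)] + [np.ones(k)]
    rhs = np.zeros(k)
    rhs[-1] = 1.0
    try:
        p = np.linalg.solve(np.array(rows), rhs)
    except np.linalg.LinAlgError:
        return None
    return p if (p >= -1e-9).all() else None

def support_enumeration(A, B, tol=1e-9):
    """All Nash equilibria of a nondegenerate bimatrix game (A, B), by trying every
    pair of equal-size supports and solving the indifference conditions."""
    m, n = A.shape
    equilibria = []
    for k in range(1, min(m, n) + 1):
        for I in combinations(range(m), k):
            for J in combinations(range(n), k):
                x = solve_indifference(B.T[np.ix_(J, I)])  # makes the column player indifferent on J
                y = solve_indifference(A[np.ix_(I, J)])    # makes the row player indifferent on I
                if x is None or y is None:
                    continue
                full_x, full_y = np.zeros(m), np.zeros(n)
                full_x[list(I)] = x
                full_y[list(J)] = y
                row_pay, col_pay = A @ full_y, full_x @ B
                # Best-response check: no pure strategy outside the support does better.
                if (row_pay.max() <= row_pay[list(I)].min() + tol and
                        col_pay.max() <= col_pay[list(J)].min() + tol):
                    equilibria.append((full_x, full_y))
    return equilibria

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [0.0, 2.0]])
    B = np.array([[2.0, 1.0], [0.0, 3.0]])
    for x, y in support_enumeration(A, B):
        print(np.round(x, 3), np.round(y, 3))
```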

Joint work with David Avis, Gabriel Rosenberg, and Rahul Savani.

Slides


Éva Tardos (Cornell University, USA)

Pricing Games in Networks

Games where agents interact through a network have played an important role in Algorithmic Game Theory. In this talk we study games in which profit-maximizing agents interact in a network. We consider models where agents compete for users via prices and aim to maximize profit, and we study the extent to which competition between service providers hurts the overall social utility of the system. One setting is based on service providers competing for users via prices and quality of service. In the other setting, buyers and sellers trade through intermediaries offering different prices. In the service-provider setting, price-setting agents can significantly hurt the overall utility, while in our market model, price-setting agents lead to socially optimal trade.

The talk is based on joint work with Larry Blume, David Easley, Ara Hayrapetyan, Jon Kleinberg and Tom Wexler.

Slides


Vijay V. Vazirani (Georgia Institute of Technology, USA)

Markets and the Primal-Dual Paradigm

Irving Fisher, in a Ph.D. thesis submitted to Yale University in 1891, defined a fundamental market model. A remarkable convex program, given by Eisenberg and Gale in 1959, captures as its optimal solutions the equilibrium allocations for the linear-utilities case of this market. In recent joint work with Devanur, Papadimitriou and Saberi, we gave a combinatorial, polynomial-time algorithm for computing equilibrium allocations, and hence optimal solutions to the Eisenberg-Gale program, for this case.
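For the linear-utilities Fisher market, where buyer i has budget m_i and values u_{ij} per unit of good j (goods scaled to unit supply), the Eisenberg-Gale program referred to above is the following convex program (a standard formulation, notation mine):

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_i m_i \log u_i \\
\text{subject to} \quad & u_i = \sum_j u_{ij}\, x_{ij} && \text{for every buyer } i,\\
                        & \sum_i x_{ij} \le 1           && \text{for every good } j,\\
                        & x_{ij} \ge 0 .
\end{aligned}
```

The market-clearing prices arise as the Lagrange (KKT) multipliers of the supply constraints, which is what connects the program to the primal-dual view described next.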

Our algorithm uses the classical primal-dual paradigm - not in its usual setting of LP-duality theory but in the enhanced setting of KKT conditions and nonlinear convex programs. In this talk, I will describe the difficulties raised by this new setting and show how our algorithm circumvents them. I will also present a generalization to spending constraint utilities and show how they can be useful in Google's AdWords market. Finally, I will allude to several basic algorithmic questions that arise from these two works.

Slides

Relevant material: paper 1 and paper 2


Adrian Vetta (McGill University, Canada)

Finding Nash Equilibria in Certain Classes of 2-Player Game

We consider the question of finding Nash equilibria in 2-player games. We show that for some types of payoff matrix (e.g. 0-1 matrices with certain forbidden structures, random matrices) this task appears easier than in general.

Slides


Berthold Vöcking (RWTH Aachen, Germany)

Inapproximability of Congestion Games

A natural and convincing notion of approximation for equilibria in congestion games assumes that agents are indifferent between strategies whose delays differ by less than a factor of λ, for some λ > 1. In this talk, we show that computing a λ-approximate equilibrium is PLS-complete for any polynomial-time computable λ > 1. Thus finding an approximate Nash equilibrium is as hard as finding an exact Nash equilibrium or solving any other problem in PLS. To our knowledge this is the first inapproximability result with respect to problems in PLS.
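A small sketch of the approximation notion for a congestion game given by explicit strategy lists: a profile is a λ-approximate equilibrium if no player can decrease her delay by more than a factor of λ by deviating unilaterally. The resources, delay functions and strategies below are hypothetical.

```python
def player_delay(profile, player, strategy, delay_funcs):
    """Delay of `player` if she plays `strategy` while everyone else keeps `profile`."""
    load = {}
    for p, s in enumerate(profile):
        for r in (strategy if p == player else s):
            load[r] = load.get(r, 0) + 1
    return sum(delay_funcs[r](load[r]) for r in strategy)

def is_lambda_approximate(profile, strategies, delay_funcs, lam):
    """True if no player can decrease her delay by more than a factor of lam."""
    for p, options in enumerate(strategies):
        current = player_delay(profile, p, profile[p], delay_funcs)
        best = min(player_delay(profile, p, s, delay_funcs) for s in options)
        if current > lam * best:
            return False
    return True

if __name__ == "__main__":
    delay_funcs = {"r1": lambda x: x, "r2": lambda x: 2 * x}   # linear delays
    strategies = [[{"r1"}, {"r2"}], [{"r1"}, {"r2"}]]          # singleton strategy sets
    profile = [{"r1"}, {"r1"}]                                 # both players on r1
    print(is_lambda_approximate(profile, strategies, delay_funcs, lam=1.0))   # True
```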

Slides


Uri Zwick (Tel Aviv University, Israel)

Simple Stochastic Games, Mean Payoff Games and Parity Games

The talk will summarize the relations between these games and will sketch the best algorithms currently known for their solution. For simple stochastic games and mean payoff games, these are randomized subexponential algorithms. For parity games a deterministic subexponential algorithm is also known. The existence of polynomial time algorithms for the solution of these games is a major open problem.
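As a point of reference, here is a small value-iteration sketch for simple stochastic games with max, min and average vertices and 0/1 sinks. For stopping games it converges to the optimal probabilities of reaching the 1-sink, but it is not one of the subexponential algorithms referred to above, and the instance is hypothetical.

```python
def ssg_value_iteration(succ, kind, iters=10000):
    """succ[v]: successors of vertex v (empty for sinks); kind[v] is one of
    'max', 'min', 'avg', 'sink0', 'sink1'.  Returns the (approximate) optimal
    probabilities of reaching the 1-sink from every vertex."""
    value = {v: 1.0 if kind[v] == "sink1" else 0.0 for v in succ}
    for _ in range(iters):
        new = dict(value)
        for v in succ:
            if kind[v] == "max":
                new[v] = max(value[w] for w in succ[v])
            elif kind[v] == "min":
                new[v] = min(value[w] for w in succ[v])
            elif kind[v] == "avg":
                new[v] = sum(value[w] for w in succ[v]) / len(succ[v])
        value = new
    return value

if __name__ == "__main__":
    # Max vertex 0 chooses between an average vertex and the 0-sink.
    succ = {0: [1, 3], 1: [2, 3], 2: [], 3: []}
    kind = {0: "max", 1: "avg", 2: "sink1", 3: "sink0"}
    print(ssg_value_iteration(succ, kind))   # values: vertex 0 = 0.5, vertex 1 = 0.5
```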

Slides