Cost Sharing over Combinatorial Domains - Georgios Birmpas (Oxford University)
We study mechanism design for combinatorial cost sharing. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although several works study specialized versions of such problems, there had been almost no progress on general combinatorial cost sharing domains until recently [DobzinskiO17]. The main goal of our work is to further understand the interplay between incentives, budget balance, and social cost approximation in this setting. Towards this, we provide a refinement of cross-monotonicity (trace-monotonicity) that is applicable to iterative mechanisms; the trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced and O(log n)-approximate with respect to the social cost. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget-balanced by construction, but in general only guarantees a poor social cost approximation of n. We identify conditions under which it achieves improved social cost approximation guarantees.
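The ascending mechanisms in this work refine Moulin's classic cross-monotonic cost-sharing scheme. As background only, here is a minimal Python sketch of the classic single-service Moulin mechanism (not the trace-monotonic variant of the talk); the player names, valuations, and cost-share rule below are invented for illustration:

```python
def moulin_mechanism(players, valuations, cost_share):
    """Classic Moulin mechanism for a single service.

    cost_share(i, S) must be cross-monotonic: player i's share can
    only drop as the serviced set S grows. We iteratively drop every
    player whose current share exceeds her reported valuation, until
    the remaining set is stable; survivors pay their final shares."""
    S = set(players)
    while True:
        drop = {i for i in S if cost_share(i, frozenset(S)) > valuations[i]}
        if not drop:
            break
        S -= drop
    payments = {i: cost_share(i, frozenset(S)) for i in S}
    return S, payments

# Toy instance: a fixed service cost of 6 split equally among the
# served players (equal split of a fixed cost is cross-monotonic).
vals = {"a": 4.0, "b": 3.5, "c": 1.0}
share = lambda i, S: 6.0 / len(S)
served, pay = moulin_mechanism(vals.keys(), vals, share)
```

With cross-monotonic shares this iterative drop rule is weakly group-strategyproof and budget-balanced; in the toy instance player c is priced out at a share of 2, and the two survivors split the cost of 6 equally.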
Two quantitative studies concerning voting systems - Allan Borodin (University of Toronto)
There is current societal interest in social choice theory (e.g., voting systems), motivated at least in part by the divisiveness apparent in many countries. Can computer science and AI bring any insight into some of the more prominent issues? More specifically, recent elections in the US and Canada (and elsewhere) have brought to prominence two issues: gerrymandering and the role of primaries in voting. (Voting goes beyond political elections, but political elections are the most newsworthy.) As one can imagine, there are complex modeling and computational issues when dealing with large social and political systems.
We have some preliminary but, we think, interesting insights into gerrymandering (the strategic design of electoral districts) and primary systems. I will briefly talk about work at IJCAI 2018 on gerrymandering and how the ``power of gerrymandering'' relates to the degree of urbanization. I will mainly talk about the issue of primaries vs. direct elections, as our work here is a blend of theory and experimental work. Here is the basic question: what is the impact on the ``quality'' of our chosen leaders of having primaries, where each party holds its own election to choose its candidate for the general election? Does this tend to result in more extreme candidates? In an AAAI 2019 paper on primaries, we conduct the first quantitative study of primary vs. direct elections.
Graph Clustering with noisy queries and motifs - Charalampos Tsourakakis (Boston University and Harvard University)
Graph clustering is a fundamental problem in graph mining with important applications ranging from social network analysis and entity resolution to computer vision and semi-supervised learning. In this talk I will discuss two novel variations of graph clustering. In the first part of the talk I will discuss the following clustering problem: there exist two latent clusters, and we are allowed to query any pair of nodes as to whether they belong to the same cluster or not, but the answer to each query is corrupted with some probability less than 1/2. Can we recover the clusters with high probability, and what is the minimum number of queries needed? I will present two state-of-the-art algorithms and a closely related application on predicting signs in online social networks. Our results improve recent results by Mazumdar and Saha [NIPS'17]. In the second part of the talk, I will discuss how to exploit motifs to uncover communities. Specifically, I will introduce the concept of motif-expanders, which serves as the basis for motif-based clustering. I will present efficient algorithms and a well-performing heuristic that outperforms some of the most popular graph clustering tools. I will conclude with some experimental results that show the effectiveness of our methods in multiple applications in machine learning and graph mining.
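As background for the query model in the first part (this is the naive repetition baseline, not the authors' query-efficient algorithm): if each repetition of a query draws fresh noise with flip probability p < 1/2, majority voting over repeated queries drives the per-query error down exponentially, after which every node can be compared against a fixed reference node. A small simulation sketch, with the oracle, flip probability, and repetition count invented for illustration:

```python
import random

def noisy_oracle(u, v, cluster, p=0.3):
    """Same-cluster query whose answer is flipped with probability p."""
    truth = cluster[u] == cluster[v]
    return truth if random.random() >= p else not truth

def majority_query(u, v, cluster, p=0.3, reps=41):
    """Boost a noisy query by majority vote over `reps` repetitions."""
    yes = sum(noisy_oracle(u, v, cluster, p) for _ in range(reps))
    return 2 * yes > reps

def recover(nodes, cluster, p=0.3):
    """Place each node in the reference node's cluster or its
    complement, using boosted queries against nodes[0]."""
    ref = nodes[0]
    return {v: majority_query(ref, v, cluster, p) for v in nodes}

# Two latent clusters: even vs. odd node ids.
random.seed(0)
truth = {i: i % 2 for i in range(20)}
guess = recover(list(range(20)), truth)  # True iff same side as node 0
```

This baseline spends O(log n) queries per node; the point of the algorithms in the talk is to do substantially better than blind repetition.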
Simple versus Optimal Contracts - Paul Duetting (London School of Economics)
This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action's reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds.
Joint work with Tim Roughgarden and Inbal Talgam-Cohen
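As background on the model (a hypothetical toy instance, not an example from the paper): under a linear contract with commission α, the agent keeps an α fraction of the realized reward, chooses the action maximizing α·E[reward] minus her cost, and the principal keeps the remaining (1 − α) fraction. A minimal sketch:

```python
def best_response(actions, alpha):
    """Agent picks the action maximizing alpha * E[reward] - cost.
    Each action is a pair (expected_reward, cost); ties break toward
    the earlier action in the list."""
    return max(actions, key=lambda a: alpha * a[0] - a[1])

def principal_utility(actions, alpha):
    """Principal keeps a (1 - alpha) share of the induced reward."""
    reward, _ = best_response(actions, alpha)
    return (1 - alpha) * reward

# Hypothetical action set: (expected reward, cost to agent).
actions = [(0.0, 0.0), (4.0, 1.0), (10.0, 6.0)]

# Sweep commissions on a grid to find the best linear contract.
best_alpha = max((a / 100 for a in range(101)),
                 key=lambda al: principal_utility(actions, al))
```

In this instance the principal's best grid commission is just above the point where the middle action becomes incentive-compatible; pushing α high enough to induce the high-reward action leaves the principal too small a share to be worthwhile.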
A Theory of Spectral Clustering - Luca Trevisan (U.C. Berkeley)
Spectral clustering algorithms find clusters in a given network by exploiting
properties of the eigenvectors of matrices associated with the network. As a
first step, one computes a spectral embedding, that is a mapping of nodes to
points in a low-dimensional real space; then one uses geometric clustering
algorithms such as k-means to cluster the points corresponding to the nodes.
Such algorithms work so well that, in certain applications unrelated to
network analysis, such as image segmentation, it is useful to associate a
network to the data, and then apply spectral clustering to the network. In
addition to their application to clustering, spectral embeddings are a
valuable tool for dimension reduction and data visualization.
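The two-step pipeline above can be made concrete in the two-cluster case, where the spectral embedding reduces to the second eigenvector of the graph Laplacian (the Fiedler vector) and the geometric step reduces to splitting by sign. A minimal NumPy sketch on a toy graph (two triangles joined by a single edge, chosen for illustration):

```python
import numpy as np

def spectral_bipartition(A):
    """Split a graph into two clusters by the sign of the Fiedler
    vector (eigenvector of the 2nd-smallest Laplacian eigenvalue)."""
    d = A.sum(axis=1)
    L = np.diag(d) - A              # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0             # True/False = side of the cut

# Toy graph: triangles {0,1,2} and {3,4,5} joined by the edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
side = spectral_bipartition(A)
```

For k > 2 clusters one takes the first k eigenvectors as the embedding and runs k-means on the rows, as described above; the sign split is the special case that the worst-case analyses in the lecture generalize.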
The performance of spectral clustering algorithms has been justified
rigorously when applied to networks coming from certain probabilistic
generative models.
A more recent development, which is the focus of this lecture, is a
worst-case analysis of spectral clustering, showing that, for every graph
that exhibits a certain cluster structure, such structure can be found by
geometric algorithms applied to a spectral embedding.
Such results generalize Cheeger's inequality for graphs (a classical result
in spectral graph theory), and they have additional applications in
computational complexity theory and in pure mathematics.
*Bio*: Luca Trevisan is a professor of electrical engineering and computer
sciences at U.C. Berkeley and a senior scientist at the Simons Institute for
the Theory of Computing. Luca studied at the Sapienza University of Rome,
was a post-doc at MIT and at DIMACS, and was on the faculty of Columbia
University, U.C. Berkeley, and Stanford before returning to Berkeley.
Luca's research is in theoretical computer science, and it is focused on
computational complexity and graph algorithms.
Luca received the STOC'97 Danny Lewin (best student paper) award, the 2000
Oberwolfach Prize, and the 2000 Sloan Fellowship. He was an invited speaker
at the 2006 International Congress of Mathematicians.
Equal-cost mechanism design with monitoring - Piotr Krysta (University of Liverpool)
We consider the question of whether the transfers of truthful mechanisms can
be defined in a way to make the cost of every agent equal. We want to marry
this natural notion of equal-cost fairness with truthfulness. Given the known
limitations of the Vickrey-Clarke-Groves (VCG) mechanism, which can be cast
as a truthful equal-cost mechanism, we focus on monitoring – a paradigm
wherein the designer can force overbidding agents to pay the reported bid. In
this context, we show how and when approximation, of both optimisation and
money burning objective functions, can be reconciled with this combination of
fairness and truthfulness. We study within this paradigm three broad classes
of optimisation problems: makespan machine scheduling, bottleneck network
design and binary covering problems with social cost minimisation. For each
of these classes we provide close upper and lower bounds on the approximation
guarantees of truthful equal-cost mechanisms with monitoring.
Joint work with Dimitris Fotakis and Carmine Ventre.
Capturing Digital Signals for Lifestyle Health Research - Yelena Mejova (ISI Foundation, Turin)
The scale and complexity of the traces of human behavior captured on the web
have made them a useful resource for tracking disease, reducing the lag of
conventional health tracking toward more real-time "now-casting". These
signals provide a rich source of information about the context of people's
health conditions, revealing their cultural, social, and personal attitudes
and behaviors. This makes social media data especially useful for
understanding lifestyle diseases, such as obesity and diabetes -- conditions
that claim more lives worldwide than infectious diseases. This talk will
discuss the
latest findings in lifestyle disease tracking via large social media
collections encompassing population and individual scales.
*Bio*: Yelena Mejova is a Research Leader at the ISI Foundation in Turin,
Italy, a part of the Digital Epidemiology group (previously, a scientist at
the Qatar Computing Research Institute, Qatar). Specializing in social media
analysis and mining, she focuses on the quantification of health and
wellbeing signals in social media, as well as tracking of social phenomena,
including politics and news consumption. She co-edited a volume on the use of
Twitter for social science in "Twitter: A Digital Socioscope", and a special
issue of Social Science Computer Review on Quantifying Politics Using Online
Data. Previously, as a postdoctoral member of the Web Mining team at Yahoo!
Research Barcelona, Yelena participated in the Linguistically Motivated
Semantic Aggregation Engines (LiMoSINe) EU project. Also, Yelena has
published widely on sentiment analysis and its application to social media
and political speech.
An Optimal Truthful Mechanism for the Online Weighted Bipartite Matching Problem - Rebecca Reiffenhauser (RWTH Aachen)
In the weighted bipartite matching problem, the goal is to find a maximum-weight matching
with nonnegative edge weights. We consider its online version where the
first vertex set is known beforehand, but vertices of the second set appear one after another.
Vertices of the first set are interpreted as items, and those of the second set as bidders. On
arrival, each bidder vertex reveals the weights of all adjacent edges and the algorithm has to
decide which of those to add to the matching. We introduce an optimal, e-competitive truthful
mechanism under the assumption that bidders arrive in random order (secretary model).
It has been shown that the upper and lower bound of e for the original secretary problem
extends to various other problems even with rich combinatorial structure, one of them being
weighted bipartite matching. But truthful mechanisms so far fall short of reasonable competitive
ratios once respective algorithms deviate from the original, simple threshold form. The best
known mechanism for weighted bipartite matching offers only a ratio logarithmic in the number
of online vertices. We close this gap, showing that truthfulness does not
impose any additional loss in the achievable competitive ratio.
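For background, the e-competitive guarantees discussed above descend from the classic single-choice secretary rule (shown here as a sketch; this is not the paper's matching mechanism): reject roughly the first n/e arrivals, then accept the first arrival beating the best rejected so far.

```python
import math
import random

def secretary(bids):
    """Classic secretary rule: observe the first floor(n/e) bids
    without accepting, then accept the first bid that beats the
    best observed one (falling back to the last bid if none does)."""
    n = len(bids)
    cutoff = int(n / math.e)
    best_seen = max(bids[:cutoff], default=float("-inf"))
    for b in bids[cutoff:]:
        if b > best_seen:
            return b
    return bids[-1]  # forced to take the final bidder

# In uniformly random arrival order, this rule selects the true
# maximum with probability approaching 1/e.
random.seed(1)
trials = 2000
wins = sum(
    secretary(random.sample(range(1000), 50))
    == max_bid
    for _ in range(trials)
    for max_bid in [999]  # placeholder replaced below
)
# Cleaner accounting of the success rate:
wins = 0
for _ in range(trials):
    bids = random.sample(range(1000), 50)
    if secretary(bids) == max(bids):
        wins += 1
rate = wins / trials
```

The rule is a single threshold, which is what makes it easy to render truthful; the difficulty addressed in the paper is preserving both truthfulness and the e-competitive ratio once the combinatorial matching structure is added.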