Seminars


2022


Reliable Real-Time Distributed AI for Mobile Autonomous Systems
30/05/2022
The autonomous operations of Unmanned Aerial Vehicles (UAVs) require the execution of continuous streams of heavy-duty mission-critical computing tasks. These tasks often take the form of complex Deep Neural Network (DNN) models applied to information-rich signals such as image and lidar data. Clearly, such strenuous effort may exceed the capabilities and resources (e.g., computing power and energy) of most UAVs. The research community has mostly relied on two distinct approaches to address this issue: model simplification and edge computing. The former may lead to performance degradation, while the performance of the latter suffers from the erratic behavior of wireless channels. In this talk, I will present two key components of reliable edge computing for Mobile Autonomous Systems (MAS), and especially UAVs. Our ultimate objective is to achieve task-level latency and accuracy guarantees. First, I will introduce the notion of data-driven redundant computing. The core idea is to replicate tasks across the system to reduce uncertainty in their total execution time. A Deep Reinforcement Learning (DRL) agent dynamically controls in real time how many and which edge servers are selected for computing. Our approach is experimental, and we demonstrate how the DRL agent requires features from multiple system blocks (telemetry, network and application) to make effective decisions in real-world deployments. Finally, I will summarize our work in the area of split computing, where we modify DNN models for vision to make them splittable across MAS and edge servers and reduce end-to-end latency in unreliable wireless environments. We pioneered this area by introducing the notion of an artificial bottleneck to obtain in-model compression, and by developing innovative training strategies that achieve the best rate-distortion curve available to date.
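As a rough illustration of why replicating a task across edge servers reduces uncertainty in completion time, the following sketch simulates offloading the same task to k servers with randomly distributed response times and keeping the fastest reply. The lognormal latency model and all parameters are assumptions for illustration; this is not the DRL controller described in the talk.

```python
import random
import statistics

def offload(num_replicas, sigma=0.5, trials=10000):
    """Send the same task to `num_replicas` edge servers and keep the first
    (fastest) reply. Per-server latency is drawn from a lognormal distribution,
    an arbitrary stand-in for an unreliable wireless link plus compute time."""
    completion = []
    for _ in range(trials):
        replies = [random.lognormvariate(mu=-1.6, sigma=sigma)
                   for _ in range(num_replicas)]
        completion.append(min(replies))
    p99 = statistics.quantiles(completion, n=100)[98]
    return statistics.mean(completion), p99

for k in (1, 2, 4):
    mean, p99 = offload(k)
    print(f"replicas={k}: mean={1000*mean:.0f} ms  p99={1000*p99:.0f} ms")
```

Under these assumptions, both the mean and the tail (p99) completion time shrink as the number of replicas grows, which is the uncertainty-reduction effect the redundant-computing approach exploits.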

2020


A synthesis of behavioural and mainstream economics
03/12/2020
Mainstream economic theory is based on the rationality assumption: that people act as best they can to promote their interests. In contrast, behavioural economics holds that people act by behavioural rules of thumb, often with poor results. We propose a synthesis according to which people indeed act by rules, which usually work well, but may work poorly in exceptional or contrived scenarios. The reason is that, like physical features, behavioural rules are the product of evolutionary processes; and evolution works on the usual, the common -- not the exception, not the contrived scenario.
Analysis and interventions in large network games: graphon games and graphon contagion
26/11/2020
Many of today’s most promising technological systems involve very large numbers of autonomous agents that influence each other and make strategic decisions within a network structure. Examples include opinion dynamics, targeted marketing in social networks, economic exchange and international trade in financial networks, product adoption decisions and social contagion. While traditional tools for network game analysis assumed that a social planner has full knowledge of the network of interactions, when we turn to very large networks two issues emerge. First, collecting data about the exact network of interactions becomes very expensive or altogether impossible because of privacy and proprietary concerns. Second, methods for designing optimal interventions that rely on the exact network structure typically do not scale well with the population size. To obviate these issues, in this talk I will present a framework in which the central planner designs interventions based on probabilistic information about agents’ interactions, which can easily be inferred from aggregated data, instead of exact network data. I will introduce the tool of “graphon games” as a way to formally describe strategic interactions in this framework and I will illustrate how this tool can be exploited to design interventions that are robust to stochastic network variations. I will cover two main applications: design of targeted interventions for linear quadratic network games and design of optimal seeding policies for threshold contagion processes. In both cases, I will illustrate how the graphon approach leads to interventions that are asymptotically optimal in terms of the population size and can be easily computed without requiring exact network data.
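For background on the linear quadratic network games mentioned above, the standard formulation has a closed-form Nash equilibrium, which the sketch below computes with numpy. The payoff form, the toy network and the parameter values are textbook assumptions, not material from the talk.

```python
import numpy as np

def lq_equilibrium(G, a, beta):
    """Nash equilibrium of a linear-quadratic network game with payoffs
    u_i = a_i*x_i - 0.5*x_i**2 + beta*x_i*sum_j G[i, j]*x_j,
    i.e. x* = (I - beta*G)^(-1) a, valid while beta times the spectral
    radius of G stays below one."""
    n = G.shape[0]
    assert beta * max(abs(np.linalg.eigvals(G))) < 1, "contraction condition violated"
    return np.linalg.solve(np.eye(n) - beta * G, a)

# toy 4-node line network with homogeneous standalone incentives
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(lq_equilibrium(G, a=np.ones(4), beta=0.3))
```

A graphon-based intervention would replace the exact matrix G with an operator estimated from aggregated data; the sketch only shows the exact-network benchmark that such interventions approximate.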
Keeping Your Friends Close: Land Allocation with Friends
29/10/2020
We examine the problem of assigning plots of land to prospective buyers who prefer living next to their friends. They care not only about the plot they receive, but also about their neighbors. This externality results in a highly non-trivial problem structure, as both friendship and land value play a role in determining agent behavior. We examine mechanisms that guarantee truthful reporting of both land values and friendships. We propose variants of random serial dictatorship (RSD) that can offer both truthfulness and welfare guarantees. Interestingly, our social welfare guarantees are parameterized by the value of friendship: if these values are low, enforcing truthful behavior results in poor welfare guarantees and imposes significant constraints on agents' choices; if they are high, we achieve good approximation to the optimal social welfare. Based on joint work with Neel Patel, Alan Tsang and Yair Zick.
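For reference, a minimal sketch of plain random serial dictatorship is given below, deliberately ignoring the friendship externalities that motivate the paper's variants; the agents, plots and values are made up.

```python
import random

def random_serial_dictatorship(agents, plots, value):
    """Baseline RSD: agents pick their favourite remaining plot in a uniformly
    random order. `value(agent, plot)` is the agent's standalone valuation;
    friendship externalities (the focus of the paper) are ignored here."""
    order = list(agents)
    random.shuffle(order)
    remaining = set(plots)
    assignment = {}
    for agent in order:
        best = max(remaining, key=lambda p: value(agent, p))
        assignment[agent] = best
        remaining.remove(best)
    return assignment

# toy example: 3 agents, 3 plots, values given by a table
vals = {("a", 1): 5, ("a", 2): 1, ("a", 3): 2,
        ("b", 1): 4, ("b", 2): 4, ("b", 3): 1,
        ("c", 1): 2, ("c", 2): 3, ("c", 3): 3}
print(random_serial_dictatorship("abc", [1, 2, 3], lambda ag, p: vals[(ag, p)]))
```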
On the characterization of generalized additive games
15/10/2020
Generalized Additive Games (GAGs) are coalitional games where the value of a coalition is calculated as a sum of individual values (provided by players not necessarily in the coalition). In this paper we characterize classes of GAGs that satisfy properties like monotonicity, superadditivity, (total) balancedness, Population Monotonicity Allocation Scheme- (PMAS-)admissibility and supermodularity, for all nonnegative individual values. We also illustrate the application of such conditions over particular GAGs studied in the literature of coalitional games.
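A minimal sketch of a coalition value of this additive form is given below, assuming a coalitional map `contributors` that selects which players' individual values count for a given coalition; the specific map and the numbers are illustrative assumptions, not the paper's notation.

```python
def gag_value(coalition, individual_value, contributors):
    """Coalition value in the additive form sketched above: the sum of the
    individual values of the players selected by `contributors(S)`, who need
    not belong to S themselves."""
    return sum(individual_value[j] for j in contributors(coalition))

# toy example: a coalition draws value from the friends of its members
friends = {1: {2}, 2: {1, 3}, 3: {2}}
v = {1: 4.0, 2: 1.0, 3: 2.0}
M = lambda S: set().union(*(friends[i] for i in S)) if S else set()
print(gag_value({1, 3}, v, M))   # contributors are {2}, so the value is 1.0
```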
Innovation and Strategic Network Formation
08/10/2020
We study a model of innovation with a large number of firms that create new technologies by combining several discrete ideas. These ideas can be acquired by private investment or via social learning. Firms face a choice between secrecy, which protects existing intellectual property, and openness, which facilitates learning from others. Their decisions determine interaction rates between firms, and these interaction rates enter our model as link probabilities in a learning network. Higher interaction rates impose both positive and negative externalities on other firms, as there is more learning but also more competition. We show that the equilibrium learning network is at a critical threshold between sparse and dense networks. At equilibrium, the positive externality from interaction dominates: the innovation rate and even average firm profits would be dramatically higher if the network were denser. So there are large returns to increasing interaction rates above the critical threshold. Nevertheless, several natural types of interventions fail to move the equilibrium away from criticality. One policy solution is to introduce informational intermediaries, such as public innovators who do not have incentives to be secretive. These intermediaries can facilitate a high-innovation equilibrium by transmitting ideas from one private firm to another.
Quality factors in biomedical and clinical data - S.T.I.T.C.H. - Sapienza Information-based Technology Innovation Center for Health
07/10/2020
Talks: Interoperability as the key to the correct use of medical data in clinical practice and in medical/scientific research (Mauro Giacomini, Università di Genova); An example of an electronic health record for a cardiology ward integrated into the hospital and territorial information system (Elena Lazarova, Università di Genova); A standardized registry for the collection and reuse of medical data for the infectious disease wards of Liguria (Sara Mora, Università di Genova).
Risk-aversion and diversity in network routing
24/09/2020
In network routing, users often trade off different objectives in selecting their best route. An example is transportation networks, where, due to uncertainty of travel times, drivers may trade off the average travel time versus the variance of a route. Or they might trade off time and cost, such as the cost paid in tolls. We wish to understand the effect of two conflicting criteria in route selection, by studying the resulting traffic assignment (equilibrium) in the network. We investigate two perspectives of this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population? (i.e., how much longer do we spend in traffic due to being risk-averse) (2) How does the equilibrium cost of a heterogeneous (diverse) population compare to that of a comparable homogeneous user population? We provide characterizations for both questions above. Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.
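As a toy illustration of the mean-variance trade-off in route choice (an assumption-laden sketch of a single user's decision, not the equilibrium analysis of the talk):

```python
import math

def best_route(routes, risk_aversion):
    """Pick the route minimizing mean travel time + risk_aversion * std deviation.
    The route statistics below are illustrative placeholders."""
    def score(r):
        mean, var = routes[r]
        return mean + risk_aversion * math.sqrt(var)
    return min(routes, key=score)

routes = {"highway": (30.0, 100.0),   # fast on average but highly variable
          "backroad": (35.0, 4.0)}    # slower but predictable
print(best_route(routes, risk_aversion=0.0))  # risk-neutral  -> highway
print(best_route(routes, risk_aversion=1.0))  # risk-averse   -> backroad
```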
Sequential Competition and the Strategic Origins of Preferential Attachment
06/02/2020
There exists a wide gap between the predictions of strategic models of network formation and empirical observations of the characteristics of socio-economic networks. Empirical observations underline a complex structure characterized by a fat-tailed degree distribution, short average distance, large clustering coefficient and positive assortativity. Game theoretic models offer a detailed representation of individuals’ incentives, but they predict the emergence of much simpler structures than those observed empirically. Random network formation processes, such as preferential attachment, provide a much better fit to empirical observations but generally lack micro-foundations. In order to bridge this gap, we propose to model network formation as extensive games and investigate under which conditions equilibria of these games are observationally equivalent to random network formation processes. In particular, we introduce a class of games in which players compete with their predecessors and their successors for the costs and benefits induced by link formation. We show that the focal equilibrium that emerges in this setting is one where players use probability distributions with full support and target the whole network with probabilities inversely proportional to the costs and benefits associated to each node. Notably, when the cost is inversely proportional to the degree of a node, equilibrium play induces a preferential attachment process.
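For reference, the sketch below simulates the standard preferential attachment process that the focal equilibrium is shown to induce; it is the textbook random process, not the extensive-form game itself, and the seed network is an arbitrary choice.

```python
import random

def preferential_attachment(n_nodes):
    """Grow a network where each newcomer links to an existing node chosen with
    probability proportional to its degree -- the random process that the
    equilibrium of the sequential competition game induces."""
    degree = {0: 1, 1: 1}             # seed: a single edge between nodes 0 and 1
    targets = [0, 1]                  # node repeated once per unit of degree
    for new in range(2, n_nodes):
        chosen = random.choice(targets)           # proportional to degree
        for x in (new, chosen):
            degree[x] = degree.get(x, 0) + 1
            targets.append(x)
    return degree

deg = preferential_attachment(1000)
print("max degree:", max(deg.values()))           # fat-tailed degree distribution
```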
Cost Sharing over Combinatorial Domains - Georgios Birmpas (Oxford University)
30/01/2020
We study mechanism design for combinatorial cost sharing. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there has been almost no progress for general combinatorial cost sharing domains until recently [DobzinskiO17]. The main goal of our work is to further understand this interplay in terms of budget balance and social cost approximation. Towards this, we provide a refinement of cross-monotonicity (trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced and O(log n)-approximate with respect to the social cost. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general only guarantees a poor social cost approximation of n. We identify conditions under which the mechanism achieves improved social cost approximation guarantees.
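As background, the classic single-service Moulin mechanism driven by a cross-monotonic cost-sharing scheme can be sketched as follows; this is the textbook construction, not the paper's ascending combinatorial mechanism, and the equal-split cost shares are an illustrative assumption.

```python
def moulin_mechanism(bids, cost_share):
    """Classic Moulin mechanism for one shared service: repeatedly drop every
    player whose current cost share exceeds their bid, until the remaining set
    is stable. `cost_share(i, S)` is assumed cross-monotonic, i.e. shares only
    grow as S shrinks, which makes the mechanism group-strategyproof."""
    served = set(bids)
    while True:
        dropped = {i for i in served if cost_share(i, served) > bids[i]}
        if not dropped:
            return served, {i: cost_share(i, served) for i in served}
        served -= dropped

# toy example: a fixed cost of 12 split equally among whoever is served
bids = {"a": 2.0, "b": 5.0, "c": 13.0}
share = lambda i, S: 12.0 / len(S)
print(moulin_mechanism(bids, share))   # only "c" is served and pays 12
```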

2019


Benford’s Law and Robust Statistics for the Detection of Frauds in International Trade, Riccardo Faini CEIS
13/12/2019
Combating fraud in international trade is a crucial task of economic regulation within the European Union. The volumes involved are huge and under-reporting of traded values leads to significant depletion of the resources available to the EU and its Member States. We show two complementary approaches to fraud detection in this domain, relying on different assumptions about the fraud generation process. The first and more established path is based on robust statistical methods for linear regression and clustering. The second, emerging approach is based on testing conformance of transaction digits to Benford’s law. For the latter, we also address the problems of reducing the false-positive rate and of developing corrected goodness-of-fit tests when Benford’s law does not hold. This talk describes joint work with several people, including Lucio Barabesi, Domenico Perrotta, Marco Riani and the research staff at the Joint Research Centre of the European Commission (JRC). The motivation for this research line comes from a longstanding collaboration of the JRC with the Anti-fraud Office of the European Union (OLAF), supported by the HERCULE Programme of the European Commission.
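As a minimal illustration of the Benford-based approach, the sketch below computes a chi-square statistic of leading digits against Benford's law; the digit-extraction heuristic and the sample values are assumptions, and the corrected tests developed in the talk are not reproduced here.

```python
import math
from collections import Counter

def benford_chi2(values):
    """Chi-square statistic of leading digits against Benford's law,
    P(d) = log10(1 + 1/d). A large statistic flags a batch of declared
    trade values for closer inspection."""
    leading = [int(str(abs(v)).lstrip("0.")[0]) for v in values]
    counts = Counter(leading)
    chi2 = 0.0
    for d in range(1, 10):
        expected = len(leading) * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

declared = [123.4, 18.2, 1045.0, 27.9, 310.5, 19.99, 2.5, 47.1, 88.0, 150.0]
print(f"chi-square statistic: {benford_chi2(declared):.2f}")
```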
Two quantitative studies concerning voting systems - Allan Borodin (University of Toronto)
12/12/2019
There is a current societal interest in social choice theory (e.g. voting systems) motivated at least in part by the divisiveness apparent in many countries. Can computer science and AI bring any insight into some of the more prominent issues? More specifically, recent elections in the US and Canada (and elsewhere) have brought to prominence a couple of issues: gerrymandering and the role of primaries in voting. (Voting goes beyond political elections, but political elections are the most newsworthy.) As one can imagine, there are complex modeling and computational issues when dealing with large social and political systems. We have some preliminary but, we think, interesting insights into gerrymandering (strategic design of electoral districts) and primary systems. I will briefly talk about a work at IJCAI 2018 on gerrymandering and how the ``power of gerrymandering'' relates to the degree of urbanization. I will mainly talk about the issue of primaries vs direct elections, as our work here is a blend of both theory and experimental work. Here is the basic question: What is the impact on the ``quality'' of our chosen leaders of having primaries where each party has its own election to choose its candidate for the general election? Does this tend to result in more extreme candidates? In an AAAI 2019 paper on primaries, we conduct the first quantitative study of primary vs direct elections.
Graph Clustering with noisy queries and motifs - Charalampos Tsourakakis (Boston University and Harvard University)
09/12/2019
Graph clustering is a fundamental problem in graph mining with important applications ranging from social networks analysis and entity resolution to computer vision and semi-supervised learning. In this talk I will discuss two novel variations of graph clustering. In the first part of the talk I will discuss the following clustering problem: there exist two latent clusters, and we are allowed to query any pair of nodes whether they belong to the same cluster or not, but the answer to the query is corrupted with some probability less than 1/2. Can we recover the clusters with high probability and what is the minimum number of queries needed? I will present two state-of-the-art algorithms and a closely related application on predicting signs in online social networks. Our results improve recent results by Mazumdar and Saha [NIPS’17]. In the second part of the talk, I will discuss how to exploit motifs to uncover communities. Specifically, I will introduce the concept of motif-expanders that serves as the basis for motif-based clustering. I will present efficient algorithms and a well-performing heuristic that outperforms some of the most popular graph clustering tools. I will conclude with some experimental results that show the effectiveness of our methods to multiple applications in machine learning and graph mining.
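A toy illustration of the noisy same-cluster query model is given below: answers are flipped with probability p < 1/2, and a naive majority vote over repeated queries recovers the truth with high probability. This sketches the model only; it is far from the query-efficient algorithms presented in the talk.

```python
import random

def noisy_same_cluster(u, v, truth, p=0.3):
    """Oracle for the model in the talk: reports whether u and v belong to the
    same latent cluster, but the answer is flipped with probability p < 1/2."""
    answer = truth[u] == truth[v]
    return answer if random.random() > p else not answer

def majority_query(u, v, truth, repeats=25, p=0.3):
    """Naive recovery by repetition: ask the same pair several times and take
    a majority vote (correct with high probability, but query-hungry)."""
    yes = sum(noisy_same_cluster(u, v, truth, p) for _ in range(repeats))
    return yes > repeats / 2

truth = {i: i % 2 for i in range(6)}          # two latent clusters
print(majority_query(0, 2, truth))            # very likely True
print(majority_query(0, 1, truth))            # very likely False
```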
Simple versus Optimal Contracts - Paul Duetting (London School of Economics)
29/03/2019
This paper examines contract theory through the theoretical computer science lens, with the goal of developing novel theory to explain and justify the prevalence of relatively simple contracts, such as linear (pure commission) contracts. First, we consider the case where the principal knows only the first moment of each action’s reward distribution, and we prove that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments. Second, we study linear contracts from a worst-case approximation perspective, and prove several tight parameterized approximation bounds. Joint work with Tim Roughgarden and Inbal Talgam-Cohen.
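A minimal principal-agent sketch of a linear (pure commission) contract is shown below: the agent picks the action maximizing its share of the expected reward minus its cost, and the principal keeps the rest. The action set and all numbers are illustrative assumptions, not taken from the paper.

```python
def agent_best_action(actions, alpha):
    """Under a linear contract with commission `alpha`, the agent chooses the
    action maximizing alpha * expected_reward - cost.
    `actions` maps an action name to (expected_reward, cost)."""
    return max(actions, key=lambda a: alpha * actions[a][0] - actions[a][1])

def principal_utility(actions, alpha):
    """The principal keeps a (1 - alpha) share of the induced expected reward."""
    chosen = agent_best_action(actions, alpha)
    return (1 - alpha) * actions[chosen][0]

actions = {"shirk": (0.0, 0.0), "low": (6.0, 1.0), "high": (10.0, 4.0)}
for alpha in (0.2, 0.5, 0.8):
    print(alpha, agent_best_action(actions, alpha), principal_utility(actions, alpha))
```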

2018


An Optimal Truthful Mechanism for the Online Weighted Bipartite Matching Problem - Rebecca Reiffenhauser (RWTH Aachen)
12/10/2018
In the weighted bipartite matching problem, the goal is to find a maximum-weight matching with nonnegative edge weights. We consider its online version where the first vertex set is known beforehand, but vertices of the second set appear one after another. Vertices of the first set are interpreted as items, and those of the second set as bidders. On arrival, each bidder vertex reveals the weights of all adjacent edges and the algorithm has to decide which of those to add to the matching. We introduce an optimal, e-competitive truthful mechanism under the assumption that bidders arrive in random order (secretary model). It has been shown that the upper and lower bounds of e for the original secretary problem extend to various other problems even with rich combinatorial structure, one of them being weighted bipartite matching. But truthful mechanisms so far fall short of reasonable competitive ratios once the respective algorithms deviate from the original, simple threshold form. The best known mechanism for weighted bipartite matching offers only a ratio logarithmic in the number of online vertices. We close this gap, showing that truthfulness does not impose any additional bounds.
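For background, the classic 1/e rule for the single-item secretary problem, from which the bound of e originates, can be sketched as follows; this is not the paper's matching mechanism, and the arrival values are made up.

```python
import random

def secretary_rule(values):
    """Classic single-item secretary rule: observe the first n/e arrivals
    without accepting, then accept the first arrival that beats everything
    seen so far (e-competitive in expectation over random arrival orders)."""
    n = len(values)
    cutoff = int(n / 2.718281828)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]          # nothing beat the sample; take the last arrival

random.seed(0)
arrivals = random.sample(range(100), 20)     # random arrival order
print(secretary_rule(arrivals), "vs optimum", max(arrivals))
```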
Capturing Digital Signals for Lifestyle Health Research - Yelena Mejova (ISI Foundation, Turin)
13/10/2018
The scale and complexity of the traces of human behavior captured on the web has been a useful resource for tracking disease, pushing the lag of conventional health tracking to more real-time "now-casting". These signals provide a rich source of information about the context of people's health conditions, revealing their cultural, social, and personal attitudes and behaviors, making social media data especially useful for understanding lifestyle diseases, such as obesity and diabetes -- conditions that claim more lives than infectious diseases worldwide. This talk will discuss the latest findings in lifestyle disease tracking via large social media collections encompassing population and individual scales. *Bio*: Yelena Mejova is a Research Leader at the ISI Foundation in Turin, Italy, a part of the Digital Epidemiology group (previously, a scientist at the Qatar Computing Research Institute, Qatar). Specializing in social media analysis and mining, her work concerns the quantification of health and wellbeing signals in social media, as well as tracking of social phenomena, including politics and news consumption. She co-edited a volume on the use of Twitter for social science in "Twitter: A Digital Socioscope", and a special issue of Social Science Computer Review on Quantifying Politics Using Online Data. Previously, as a postdoctoral member of the Web Mining team at Yahoo! Research Barcelona, Yelena participated in the Linguistically Motivated Semantic Aggregation Engines (LiMoSINe) EU project. Also, Yelena has published widely on sentiment analysis and its application to social media and political speech.
A Theory of Spectral Clustering - Luca Trevisan (U.C. Berkeley)
18/10/2018
Spectral clustering algorithms find clusters in a given network by exploiting properties of the eigenvectors of matrices associated with the network. As a first step, one computes a spectral embedding, that is a mapping of nodes to points in a low-dimensional real space; then one uses geometric clustering algorithms such as k-means to cluster the points corresponding to the nodes. Such algorithms work so well that, in certain applications unrelated to network analysis, such as image segmentation, it is useful to associate a network to the data, and then apply spectral clustering to the network. In addition to their application to clustering, spectral embeddings are a valuable tool for dimension reduction and data visualization. The performance of spectral clustering algorithms has been justified rigorously when applied to networks coming from certain probabilistic generative models. A more recent development, which is the focus of this lecture, is a worst-case analysis of spectral clustering, showing that, for every graph that exhibits a certain cluster structure, such structure can be found by geometric algorithms applied to a spectral embedding. Such results generalize the graph version of Cheeger’s inequality (a classical result in spectral graph theory), and they have additional applications in computational complexity theory and in pure mathematics. *Bio*: Luca Trevisan is a professor of electrical engineering and computer sciences at U.C. Berkeley and a senior scientist at the Simons Institute for the Theory of Computing. Luca studied at the Sapienza University of Rome, he was a post-doc at MIT and at DIMACS, and he was on the faculty of Columbia University, U.C. Berkeley, and Stanford, before returning to Berkeley in 2014. Luca's research is in theoretical computer science, and it is focused on computational complexity and graph algorithms. Luca received the STOC'97 Danny Lewin (best student paper) award, the 2000 Oberwolfach Prize, and the 2000 Sloan Fellowship. He was an invited speaker at the 2006 International Congress of Mathematicians.
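The two-step pipeline described above (spectral embedding followed by k-means) translates directly into code; the normalized Laplacian, the toy graph and the parameters below are common-default assumptions rather than choices made in the lecture.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(adj, k):
    """Embed nodes with the k eigenvectors of the normalized Laplacian having
    smallest eigenvalues, then run k-means on the embedded points."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
    _, eigvecs = np.linalg.eigh(lap)                         # ascending eigenvalues
    embedding = eigvecs[:, :k]
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

# two tight triangles joined by a single edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(spectral_clustering(A, k=2))   # splits {0,1,2} from {3,4,5}
```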
Equal-cost mechanism design with monitoring - Piotr Krysta (University of Liverpool)
29/11/2018
We consider the question of whether the transfers of truthful mechanisms can be defined in a way to make the cost of every agent equal. We want to marry this natural notion of equal-cost fairness with truthfulness. Given the known limitations of the Vickrey-Clarke-Groves (VCG) mechanism, which can be cast as a truthful equal-cost mechanism, we focus on monitoring – a paradigm wherein the designer can force overbidding agents to pay the reported bid. In this context, we show how and when approximation, of both optimisation and money burning objective functions, can be reconciled with this combination of fairness and truthfulness. We study within this paradigm three broad classes of optimisation problems: makespan machine scheduling, bottleneck network design and binary covering problems with social cost minimisation. For each of these classes we provide close upper and lower bounds on the approximation guarantees of truthful equal-cost mechanisms with monitoring. Joint work with Dimitris Fotakis and Carmine Ventre.
