Multi-agent systems: from robotic to financial networks
08/02/2021
A team of drones looking for survivors amid the rubble of an earthquake, a squad of mobile robots inspecting a field of interest, and algorithmic traders in a financial network apparently have little in common. In all these scenarios, however, the interplay among the multiple actors involved can be abstracted as a complex network of interacting agents with cooperative, opportunistic, or even antagonistic behaviors.
In drone networks, multiple agents, mostly unmanned mobile devices, cooperate to achieve a common goal or to share a common view of ongoing phenomena in a field of interest. They may act opportunistically, trading off their application tasks against their communication needs. Open research challenges include trajectory management, task assignment, and routing.
In financial systems, traders, banks, and the stock exchange can also be abstracted as a model of interacting agents, often exhibiting interdependencies that can be studied with the tools of network science and multi-agent systems.
This short talk describes some of the research goals I have enthusiastically pursued in recent years, with a focus on the funded projects I have been working on.
|
Chatbots for software modelling
14/12/2021
Chatbots are software services accessed via conversation in natural language. They are used to assist with all kinds of procedures, such as booking flights, querying visa information, or assigning tasks to developers. They can be embedded in web pages and social networks, and used from mobile devices without installing dedicated apps.
In this seminar, we will see how to take advantage of chatbots and social networks to enable the collaborative creation of software models by groups of users. The process is assisted by modelling bots that orchestrate the collaboration and interpret the users' inputs (in natural language) to incrementally build a domain model. The advantages of this modelling approach include ubiquity of use, automation, assistance, natural user interaction, traceability of design decisions, the possibility of incorporating coordination protocols, and seamless integration with the users' normal daily use of social networks. We will showcase the tool SOCIO, which supports this novel modelling paradigm.
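To make the idea of a modelling bot concrete, here is a minimal sketch, not SOCIO's actual implementation: messages written in a constrained natural language are parsed into commands that incrementally extend a shared domain model. The command grammar and model structure below are hypothetical.

```python
# Toy modelling bot: parse simple chat messages into updates of a shared model.
import re

model = {}   # class name -> list of attributes (a stand-in for a domain model)

def handle(message):
    m = re.match(r"add class (\w+)", message, re.I)
    if m:
        model.setdefault(m.group(1), [])
        return f"Added class {m.group(1)}"
    m = re.match(r"add attribute (\w+) to (\w+)", message, re.I)
    if m and m.group(2) in model:
        model[m.group(2)].append(m.group(1))
        return f"Added attribute {m.group(1)} to {m.group(2)}"
    return "Sorry, I did not understand that"

# Several users in a group chat taking turns refining the same model.
for msg in ["add class Flight", "add attribute departure to Flight", "add class Passenger"]:
    print(handle(msg))
print(model)   # {'Flight': ['departure'], 'Passenger': []}
```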
|
Toward Data-Driven Self-Adaptive Spectrum-Aware Wireless Systems
06/12/2021
The massive scale and strict performance requirements of next-generation wireless networks will require embedded devices to perform real-time, fine-grained optimization of their spectrum usage. Yet today's networking protocols and architectures are deeply rooted in inflexible designs, and rely on optimization models and strategies that are either too complex or too simplistic to be fully effective in today's crowded spectrum environment. In this talk, we will introduce and discuss our recent research toward the design of data-driven, self-adaptive, spectrum-aware wireless systems, where transmitters and receivers use real-time deep learning to infer and optimize their networking parameters based on ongoing spectrum conditions. We will conclude the talk by discussing existing technical challenges and possible research directions.
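As a rough illustration of what "real-time deep learning over spectrum data" can mean, here is a hedged sketch, not the systems presented in the talk: a small convolutional classifier over a window of I/Q samples whose output a radio could use to pick its PHY parameters. The class labels, window size, and parameter table are assumptions made up for this example.

```python
# Illustrative only: tiny CNN over raw I/Q samples driving a PHY parameter choice.
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    def __init__(self, n_classes=4, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, iq):                      # iq: (batch, 2, window), I and Q channels
        return self.head(self.features(iq).squeeze(-1))

# Hypothetical mapping from inferred channel condition to transmission parameters.
PARAMS = {0: "BPSK, rate 1/2", 1: "QPSK, rate 1/2", 2: "16-QAM, rate 3/4", 3: "64-QAM, rate 5/6"}

model = SpectrumClassifier()
iq_window = torch.randn(1, 2, 128)              # stand-in for samples from the RF front end
condition = model(iq_window).argmax(dim=-1).item()
print("selected PHY configuration:", PARAMS[condition])
```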
|
Order! A tale of money, intrigue, and specifications
02/12/2021
Mistrust of traditional financial institutions is motivating the development of decentralized financial infrastructures based on blockchains. In particular, consortium blockchains (such as the Linux Foundation's Hyperledger and Facebook's Diem) are emerging as the approach preferred by businesses. These systems allow only a well-known set of mutually distrustful parties to add blocks to the blockchain; in this way, they aim to retain the benefits of decentralization without embracing the cypherpunk philosophy that informed Nakamoto's disruptive vision.
At the core of consortium blockchains is State Machine Replication, a classic technique borrowed from fault-tolerant distributed computing; to ensure the robustness of their infrastructure, consortium blockchains actually borrow the Byzantine-tolerant version of this technique, which guarantees that the blockchain will operate correctly even if as many as about a third of the contributing parties are bent on cheating. But, sometimes, "a borrowing is a sorrowing".
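For concreteness, the "about a third" figure is the classical Byzantine fault-tolerance bound (a standard result in the literature, not specific to any system in the talk): with n contributing parties, of which at most f may be Byzantine, safety and liveness require

```latex
n \ge 3f + 1 \quad\Longleftrightarrow\quad f \le \left\lfloor \frac{n-1}{3} \right\rfloor
```

so, for example, a consortium of ten parties can tolerate at most three cheaters.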
I will discuss why Byzantine-tolerant state machine replication is fundamentally incapable of recognizing, never mind preventing, an ever-present scourge of financial exchanges: the fraudulent manipulation of the order in which transactions are processed, and how its specification needs to be expanded to give it a fighting chance.
But is it possible to completely eliminate the ability of Byzantine parties to engage in order manipulation? What meaningful ordering guarantees can be enforced? And at what cost?
|
Learning and accruing knowledge over time using modular architectures
07/10/2021
One of the hallmarks of human intelligence is the ability to learn new tasks despite the paucity of direct supervision. Machine learning models have recently achieved impressive performance in this setting by using the following protocol: i) collect a massive dataset, ii) train a very large model, and iii) adapt to downstream tasks using very little, if any, task-specific labeled data. While this has been working remarkably well, it is still dissatisfying, because the information present in each downstream task is never transformed into actual knowledge that can be leveraged to improve the prediction of subsequent downstream tasks. As a result, every so often even larger models need to be retrained from scratch to account for the ever-increasing amount of data.
This raises two basic questions. First, what learning settings are useful to study knowledge accrual? And second, what methods are effective and efficient at learning from never-ending streams of data? In this talk, I will present a preliminary investigation in our quest to answer these questions. I will present experiments using anytime and continual learning, with metrics accounting for both error rate and efficiency of learning through time.
I will also discuss how modular architectures can strike good trade-offs in this setting. These networks, whose computation is expressed as the composition of basic modules, can naturally grow over time to account for new incoming data, simply by adding new modules to the existing set, and they can retain efficiency as the number of modules grows if only a small, constant number of modules is used at inference time. While these are admittedly baby steps toward our original goal, we hope to stimulate discussion and interest in our community about the fundamental question of how to represent and accrue knowledge over time.
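The following is a minimal sketch of that idea, under my own assumptions rather than the speaker's architecture: a pool of small modules that grows over time, with a router that activates only a fixed number k of modules per input, so inference cost stays roughly constant as the pool grows.

```python
# Illustrative modular network: the module pool grows, per-input compute does not.
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    def __init__(self, dim=32, n_classes=10, k=2):
        super().__init__()
        self.dim, self.k = dim, k
        self.pool = nn.ModuleList()          # grows over time as new data arrives
        self.router = None                   # rebuilt whenever the pool grows
        self.head = nn.Linear(dim, n_classes)
        self.grow()                          # start with one module

    def grow(self):
        # Knowledge accrual step: add a fresh module, keep all existing ones.
        self.pool.append(nn.Sequential(nn.Linear(self.dim, self.dim), nn.ReLU()))
        self.router = nn.Linear(self.dim, len(self.pool))

    def forward(self, x):
        k = min(self.k, len(self.pool))
        top = self.router(x).topk(k, dim=-1).indices          # k modules per input
        out = torch.zeros_like(x)
        # For clarity, every module is evaluated and unselected outputs are masked;
        # a real implementation would dispatch only to the k selected modules.
        for i, mod in enumerate(self.pool):
            mask = (top == i).any(dim=-1, keepdim=True).float()
            out = out + mask * mod(x)
        return self.head(out)

net = ModularNet()
for _ in range(3):                        # simulate three growth steps over a stream
    net.grow()
print(net(torch.randn(4, 32)).shape)      # torch.Size([4, 10])
```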
|
The Pit and the Pendulum - Part II
01/10/2021
In the second seminar, I will discuss the tension between providing strong isolation guarantees (which greatly simplify the task of programming concurrent applications) and trying to maximize these applications' performance. Since the elegant foundations of transaction processing were established in the mid-1970s with the notion of serializability and the codification of the ACID (Atomicity, Consistency, Isolation, Durability) paradigm, performance has not been considered one of ACID's strong suits, especially for distributed data stores. Indeed, the NoSQL/BASE movement that started a decade ago with Amazon's Dynamo was born out of frustration with the limited scalability of traditional ACID solutions, only to become itself a source of frustration once the challenges of programming applications in this new paradigm began to sink in. But how fundamental is this dichotomy between performance and ease of programming? In my talk, I'll share with you the intellectual journey my students and I embarked on as we tried to overcome the traditional terms of this classic tradeoff.
|
The Pit and the Pendulum - Part I
30/09/2021
The cloud datastores that support today's service economy offer applications the ability to program using a transactional interface. Transactions are groupings of operations that take effect atomically: either all operations take effect or none do. They simplify program development by allowing developers to group related operations into a single atomic unit. For performance, modern datastores allow multiple transactions to execute concurrently; isolation then defines a contract that regulates the interaction between these concurrent transactions. Indeed, isolation is also important in many machine learning algorithms that iteratively transform some global state, such as model parameters or variable assignments. When these updates are structured as transactions, they can be executed concurrently to achieve greater scalability, relying on isolation to maintain the semantics and theoretical properties of the original serial algorithm.
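To fix intuition, here is a toy sketch of such a transactional interface, not any real datastore's API: writes are buffered and installed all-or-nothing at commit, and a single global lock provides a trivially strong (serial) form of isolation.

```python
# Toy in-memory store with atomic, isolated (serially executed) transactions.
import threading

class ToyStore:
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def transaction(self):
        return _Txn(self)

class _Txn:
    def __init__(self, store):
        self.store, self.writes = store, {}

    def __enter__(self):
        self.store._lock.acquire()                 # concurrent transactions wait their turn
        return self

    def get(self, key):
        return self.writes.get(key, self.store._data.get(key))

    def put(self, key, value):
        self.writes[key] = value                   # buffered, not yet visible to others

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.store._data.update(self.writes)   # install all writes atomically
        self.store._lock.release()                 # on error, nothing takes effect
        return False

store = ToyStore()
with store.transaction() as t:                     # a transfer: both updates or neither
    t.put("alice", (t.get("alice") or 100) - 10)
    t.put("bob", (t.get("bob") or 0) + 10)
print(store._data)                                 # {'alice': 90, 'bob': 10}
```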
But what guarantees should isolation offer? And how expensive is it to enforce them?
In my first seminar, I will discuss the fascinating history of our community's attempts at formalizing isolation. You'll meet giants like Jim Gray and Barbara Liskov, Turing award winners who wrestled with this challenge, and you'll see what you think about our recent attempt to venture where such giants have trod.
|
Knowledge Discovery from Graphs and Networks
21/09/2021
In this talk, I will present some recent research directions that I have been exploring; one common denominator is the notion of graph or network. I'll start by describing my activities in the field of knowledge graphs (KGs), graphs organized as sets of triples of the form (subject, predicate, object), where the predicate denotes some semantic relationship between the subject and the object (e.g., Stanley Kubrick, director, A Clockwork Orange). I'll discuss why existing approaches to learning low-level representations (or embeddings) for subjects, objects, and predicates are sub-optimal when it comes to learning representations of triples as a whole, and I'll show how to transform a KG into its triple-centric version by taking into account the semantics of edges. I will then describe two triple embedding learning architectures useful for downstream tasks such as triple verification: one based on biased random walks and the other based on graph neural networks. Next, I will discuss how to improve any existing semantics-oblivious embedding approach based on random walks by superimposing an abstract notion of neighborhood, based on an arbitrary node similarity measure. Finally, in the landscape of networks, I will describe ongoing research activities on a new topic called community deception, which studies how to hide a community (a set of nodes) from social network analysis tools. I'll discuss some techniques based on carefully selected edge updates and their extension to attributed networks.
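As a rough sketch of the triple-centric idea (my own simplification, not the speaker's method, and ignoring edge semantics), one can make each whole triple a node and link two triple-nodes when they share an entity; random walks over this graph can then feed a word2vec-style model that learns one embedding per triple.

```python
# Hedged sketch: triple-centric graph + random walks as input to a skip-gram model.
import random
from itertools import combinations

kg = [
    ("Stanley_Kubrick", "director", "A_Clockwork_Orange"),
    ("Stanley_Kubrick", "director", "The_Shining"),
    ("A_Clockwork_Orange", "basedOn", "Burgess_novel"),
]

# Adjacency between triples that share a subject or object entity.
adj = {t: set() for t in kg}
for t1, t2 in combinations(kg, 2):
    if {t1[0], t1[2]} & {t2[0], t2[2]}:
        adj[t1].add(t2)
        adj[t2].add(t1)

def random_walk(start, length=5):
    walk, node = [start], start
    for _ in range(length):
        if not adj[node]:
            break
        node = random.choice(sorted(adj[node]))
        walk.append(node)
    return walk

# Each walk is a "sentence" of triple-nodes; training a skip-gram model over many
# such walks would yield an embedding for every triple as a whole.
print(random_walk(kg[0]))
```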
|
Abstractions and Their Compilers
16/09/2021
An abstraction in computer science is a data model plus a "programming language"; the language is often far simpler than a general-purpose programming language. We shall consider four different ways that abstractions have been used. Especially important are "declarative abstractions," where you say what you want done but not how to do it. These abstractions require clever compilation, including some powerful optimization techniques, if they are to be used in practice. We shall talk about three such declarative abstractions: regular expressions and their compilation into finite automata; context-free grammars and their compilation into shift-reduce parsers; and the relational model of data and its compilation into executable code.
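A small example of the first of these abstractions: the pattern a(a|b)*b declares what strings to accept, and an equivalent deterministic finite automaton, written out here by hand as a transition table, is the kind of "how" a regex compiler would produce. This is an illustration of the idea, not a general regex-to-automaton compiler.

```python
# DFA equivalent to the regular expression a(a|b)*b over the alphabet {a, b}.
DFA = {
    "start":  {"a": "seen_a", "b": "reject"},
    "seen_a": {"a": "seen_a", "b": "accept"},   # started with a, last symbol was a
    "accept": {"a": "seen_a", "b": "accept"},   # started with a, last symbol was b
    "reject": {"a": "reject", "b": "reject"},
}
ACCEPTING = {"accept"}

def matches(s):
    state = "start"
    for ch in s:
        state = DFA[state].get(ch, "reject")
    return state in ACCEPTING

for s in ["ab", "aabab", "ba", "a", ""]:
    print(f"{s!r}: {matches(s)}")
```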
|
From Computational Argumentation to Explanation
09/06/2021
Computational argumentation is a well-established field in (mostly symbolic) AI focusing on defining argumentation frameworks comprising sets of arguments and dialectical relations between them (e.g., of attack and, in addition or instead, of support), as well as so-called semantics (e.g., amounting to definitions of dialectically acceptable sets of arguments or of the dialectical strength of arguments, satisfying desirable dialectical properties such as that supports for an argument should strengthen it). In this talk I will overview our recent efforts towards deploying computational argumentation to obtain, and deliver to users, explanations of different formats for a variety of systems, including data-driven classifiers. I will also argue that explainable AI (XAI), which has witnessed unprecedented growth in AI in recent years, can be ideally supported by computational argumentation models, whose dialectical nature matches well some basic desirable features of explanatory activities.
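For readers new to the field, here is a minimal sketch of an abstract (attack-only) argumentation framework and one standard semantics, the grounded extension, computed as the least fixed point of the "defence" operator. The example arguments and attack relation are made up for illustration.

```python
# Abstract argumentation framework: arguments plus an attack relation.
arguments = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}   # (attacker, target)

def attackers(x):
    return {s for (s, t) in attacks if t == x}

def defended_by(S):
    """Arguments whose every attacker is itself attacked by some member of S."""
    return {a for a in arguments
            if all(attackers(b) & S for b in attackers(a))}

grounded = set()
while True:                       # iterate the defence operator from the empty set
    nxt = defended_by(grounded)
    if nxt == grounded:
        break
    grounded = nxt

print(sorted(grounded))           # ['a', 'c']: a is unattacked, and a defends c against b
```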
|