Landscape and Training Dynamics of DNNs: lessons from physics-inspired methods
13/02/2020
Despite their formidable success in recent years, a fundamental understanding of deep neural networks (DNNs) is still lacking. Open questions include the origin of the slowness of the training dynamics, and the relationship between the dimensionality of parameter space and the number of training examples, since DNNs empirically generalize very well even when overparametrized. A popular way to address these issues is to study the topology of the cost function (the loss landscape) and the properties of the algorithm used for training (usually stochastic gradient descent, SGD).
Here, we use methods and results coming from the physics of disordered systems, in particular glasses and sphere packings. On one hand, we are able to understand to what extent DNNs resemble widely studied physical systems. On the other hand, we use this knowledge to identify properties of the learning dynamics and of the landscape.
In particular, through the study of time correlation functions in weight space, we argue that the slow dynamics is not due to barrier crossing, but rather to an increasingly large number of null-gradient directions, and we show that, at the end of learning, the system is diffusing at the bottom of the landscape. We also find that DNNs exhibit a phase transition between the over- and underparametrized regimes, where perfect fitting can or cannot be achieved. We show that in the overparametrized phase there cannot be spurious local minima. In the vicinity of this transition, properties of the curvature of the loss function minima are critical.
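As a toy illustration of the two-time correlation functions studied here, the sketch below trains an over-parametrized linear least-squares model (an invented stand-in for a DNN; the dimensions, learning rate, and waiting times are all illustrative, not from the talk) with plain gradient descent, and measures the overlap between weight snapshots taken at different waiting times:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-parametrized "network": d weights, n << d training examples,
# so the model can fit the data perfectly (the overparametrized phase).
n, d = 20, 200
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = rng.normal(size=n)

w = rng.normal(size=d)
lr = 0.5
waiting_times = [10, 100, 1000]
snapshots = {}

for t in range(1, 2001):
    grad = X.T @ (X @ w - y) / n      # gradient of the mean-squared error
    w -= lr * grad
    if t in waiting_times:
        snapshots[t] = w.copy()

# Two-time overlap C(t_w, t): how correlated the weights at waiting time
# t_w are with the weights at the final time t = 2000.
for t_w, w_old in sorted(snapshots.items()):
    c = w_old @ w / (np.linalg.norm(w_old) * np.linalg.norm(w))
    print(f"C(t_w={t_w}, t=2000) = {c:.3f}")
```

The printed overlaps show how much the weights keep moving after each waiting time; in a real DNN the analogous curves are what distinguish barrier crossing from diffusion along flat, null-gradient directions.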
This kind of knowledge can be used both as a basis for a more grounded understanding of DNNs and for hands-on requirements such as hyperparameter optimization and model selection.

Fences and RMRs Required for Synchronization
24/02/2020
Compiler optimizations that execute memory accesses out of (program) order often lead to incorrect execution of concurrent programs. These reorderings are prohibited by inserting costly fence (memory barrier) instructions. An algorithm's inherent fence complexity is a good estimate of its time complexity, as is its RMR complexity: the number of Remote Memory References the algorithm must issue.
When write instructions are executed in order, as in the Total Store Order (TSO) model, it is possible to implement a lock (and other objects) using only one RAW fence and an optimal O(n log n) RMR complexity. However, when store instructions may be reordered, as in the Partial Store Order (PSO) model, we prove that there is an inherent tradeoff between fence and RMR complexities.
The proof relies on an interesting encoding argument.

Network archeology: on revealing the past of random trees
10/02/2020
Networks are often naturally modeled by random processes in which nodes of the network are added one-by-one, according to some random rule. Uniform and preferential attachment trees are among the simplest examples of such dynamically growing networks. The statistical problems we address in this talk regard discovering the past of the network when a present-day snapshot is observed. Such problems are sometimes termed "network archeology". We present a few results that show that, even in gigantic networks, a lot of information is preserved from the very early days.
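A minimal sketch of the flavor of these problems: grow a uniform random recursive tree node-by-node, forget the arrival order, and try to locate the root from the final snapshot alone. The centroid-style estimator below (ranking nodes by the largest subtree hanging off them) is one standard choice; the tree size and the estimator details are illustrative, not taken from the talk.

```python
import random
from collections import defaultdict

random.seed(0)

# Grow a uniform random recursive tree: node v attaches to a uniformly
# random earlier node. Node 0 is the (hidden) root we will try to recover.
n = 500
adj = defaultdict(list)
for v in range(1, n):
    u = random.randrange(v)
    adj[u].append(v)
    adj[v].append(u)

def subtree_sizes(root):
    """Iterative DFS: subtree sizes when the tree is hung from `root`."""
    size, parent = [1] * n, [-1] * n
    order, stack, seen = [root], [root], {root}
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                parent[w] = u
                order.append(w)
                stack.append(w)
    for u in reversed(order):          # children are processed before parents
        if parent[u] != -1:
            size[parent[u]] += size[u]
    return size, parent

size, parent = subtree_sizes(0)

def centrality(v):
    """Largest component left when v is removed (smaller = more central)."""
    comps = [size[w] for w in adj[v] if parent[w] == v]
    if parent[v] != -1:
        comps.append(n - size[v])
    return max(comps)

ranking = sorted(range(n), key=centrality)   # most central candidates first
print("root's rank among candidates:", ranking.index(0) + 1)
```

The point mirrored from the talk is that the root stays unusually central: a short list of the most central nodes contains it with high probability, even as the tree grows large.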
Gabor Lugosi is an ICREA research professor at the Department of Economics, Pompeu Fabra University, Barcelona. He graduated in electrical engineering at the Technical University of Budapest in 1987, and received his Ph.D. from the Hungarian Academy of Sciences in 1991. His main research interests include the theory of machine learning, combinatorial statistics, inequalities in probability, random graphs and random structures, and information theory.

Model Based Design of Safety and Mission Critical Systems
19/02/2020
Many software-based control systems are in fact safety- or mission-critical systems. Examples include aerospace, transportation, medical devices, and financial systems.
In this talk I will outline my research activity on model-based synthesis and verification of software-based control systems, and show how the general-purpose algorithms and tools developed have been used in specific application domains.

Safe and Efficient Exploration in Reinforcement Learning
05/02/2020
At the heart of Reinforcement Learning lies the challenge of trading off exploration (collecting data for identifying better models) and exploitation (using the estimates to make decisions). In simulated environments (e.g., games), exploration is primarily a computational concern. In real-world settings, exploration is costly, and a potentially dangerous proposition, as it requires experimenting with actions that have unknown consequences. In this talk, I will present our work towards rigorously reasoning about safety of exploration in reinforcement learning. I will discuss a model-free approach, where we seek to optimize an unknown reward function subject to unknown constraints. Both reward and constraints are revealed through noisy experiments, and safety requires that no infeasible action is chosen at any point. I will also discuss model-based approaches, where we learn about system dynamics through exploration, yet need to verify safety of the estimated policy. Our approaches use Bayesian inference over the objective, constraints and dynamics, and, under some regularity conditions, are guaranteed to be both safe and complete, i.e., to converge to a natural notion of reachable optimum. I will also present recent results harnessing model uncertainty to improve the efficiency of exploration, and show experiments on safely and efficiently tuning cyber-physical systems in a data-driven manner.
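One ingredient of safe exploration can be sketched in a few lines: certify new points as safe using confidence bounds plus a regularity assumption, and only ever act inside the certified set. The SafeOpt-style toy below works on a 1-D grid with a known Lipschitz constant for the unknown safety function; the functions, threshold, and seed point are all made up for illustration and are not the speaker's actual algorithm or benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.linspace(0.0, 1.0, 101)
safety = 1.0 - 2.0 * (grid - 0.4) ** 2   # unknown; only noisy queries allowed
L = 2.0                                   # assumed Lipschitz bound on safety
h = 0.5                                   # safety threshold
noise = 0.01
obs = {40: safety[40] + noise * rng.normal()}   # seed point known to be safe

def safe_set():
    """Indices certified safe via the Lipschitz bound around observations."""
    return [i for i, x in enumerate(grid)
            if any(v - 3 * noise - L * abs(x - grid[j]) >= h
                   for j, v in obs.items())]

for _ in range(30):
    certified = safe_set()
    # Explore the frontier: the certified point farthest from all observations.
    i = max(certified,
            key=lambda k: min(abs(grid[k] - grid[j]) for j in obs))
    obs[i] = safety[i] + noise * rng.normal()

print(f"certified safe: {len(safe_set())} of {grid.size} grid points")
```

Every query lands inside the certified set, yet the set keeps expanding toward the true safe region; the full approaches in the talk replace the Lipschitz bound with Bayesian (Gaussian process) confidence intervals and additionally optimize a reward.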
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received ERC Starting Investigator and ERC Consolidator grants, the Deutscher Mustererkennungspreis, an NSF CAREER award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications as well as the ETH Golden Owl teaching award. His research on machine learning and adaptive systems has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time award 2019. Andreas Krause served as Program Co-Chair for ICML 2018, and is regularly serving as Area Chair or Senior Program Committee member for ICML, NeurIPS, AAAI and IJCAI, and as Action Editor for the Journal of Machine Learning Research.

Automatic Synthesis of Controllers for Cyber-Physical Systems
28/01/2020
A Cyber-Physical System (sometimes also called a hybrid system) is composed of physical subsystems and software subsystems. Many Cyber-Physical Systems are in fact control systems: the software part is designed to control the physical part, so that some desired behavior is achieved. Applications of such Cyber-Physical Control Systems are ubiquitous: smart grids, electrical engineering, aerospace, automotive, biology, and so on.
Recently, many methodologies have been presented for automatically synthesizing controllers for Cyber-Physical Systems. Such methodologies take as input a description of the physical part (plant) of a Cyber-Physical Control System, a set of requirements for the software part (controller), and a set of desired behaviors for the closed-loop system (controller + plant), and output the actual software for the controller, which is guaranteed to meet all given specifications.
In this talk, I will present a selection of such methodologies, mainly focusing on my own contributions.

Adaptive Communication for Battery-Free IoT devices
28/01/2020
With the ever-growing usage of batteries in the IoT era, the need for more eco-friendly technologies is clear. RF-powered computing enables the redesign of personal computing devices in a batteryless manner. While there has been substantial work on the underlying methods for RF-powered computing, practical applications of this technology have largely been limited to scenarios that involve simple tasks. This talk discusses how RFID technology, typically used to implement object identification and counting, can be exploited to realize a battery-free smart home. In particular, this talk considers the coexistence of several battery-free devices with different transmission requirements (periodic, event-based, and real-time) and proposes a new approach to dynamically collect information from devices without requiring any a priori knowledge of the environment.

Learning to rank results optimally in search and recommendation engines
17/01/2020
Consider the scenario where an algorithm is given a context, and then it must select a slate of relevant results to display. As four special cases, the context may be a search query, a slot for an advertisement, a social media user, or an opportunity to show recommendations. We want to compare many alternative ranking functions that select results in different ways. However, A/B testing with traffic from real users is expensive. This research provides a method to use traffic that was exposed to a past ranking function to obtain an unbiased estimate of the utility of a hypothetical new ranking function. The method is a purely offline computation, and relies on assumptions that are quite reasonable. We show further how to design a ranking function that is the best possible, given the same assumptions. Learning optimal rankings for search results given queries is a special case. Experimental findings on data logged by a real-world e-commerce web site are positive.
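The core of such offline estimates is inverse propensity scoring: reweight each logged interaction by how likely the old policy was to produce it. The sketch below simulates this on invented data (the item count, click rates, and policies are hypothetical, not the e-commerce dataset from the talk), evaluating a new deterministic policy purely from traffic logged under an old stochastic one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated logs: for each query, the old policy records the item shown in
# the top slot, the propensity with which it chose that item, and the click.
n_queries, n_items = 10_000, 5
item_ctr = rng.uniform(0.05, 0.4, n_items)        # unknown true click rates

# Logging policy: a smoothed random ranker, so every propensity is bounded
# away from zero (keeps the importance weights finite).
old_probs = 0.8 * rng.dirichlet(np.ones(n_items), size=n_queries) + 0.2 / n_items
shown = np.array([rng.choice(n_items, p=p) for p in old_probs])
clicked = rng.random(n_queries) < item_ctr[shown]
propensity = old_probs[np.arange(n_queries), shown]

# Hypothetical new deterministic policy: always show one fixed item.
new_item = int(np.argmax(item_ctr))

# Inverse-propensity-scoring estimate of the new policy's click rate,
# computed purely offline from the logged traffic.
match = (shown == new_item).astype(float)
ips_estimate = np.mean(match * clicked / propensity)

print(f"IPS estimate: {ips_estimate:.3f}  (true value: {item_ctr[new_item]:.3f})")
```

The estimate is unbiased because, in expectation, dividing by the propensity exactly cancels the old policy's preference for each item; the key assumption (as in the talk) is that the old policy gave every relevant action a nonzero chance of being shown.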

A 24-hour course on "Technical and Scientific Writing" (course held in Italian)
28/01/2020
