Course offerings delivered 2024/2025

Teacher: Guillem Rigaill

Title: An introduction to (Multiple) Changepoint detection
When: Monday, Feb 24 - Thursday, Feb 27, 2025

Abstract:
In recent years, there has been a proliferation of methods for detecting changepoints (also known as breakpoints or structural breaks) in data streams. This surge has been driven by the wide range of applications where changepoint methods are needed, including genomics, neuroscience, climate science, finance, and econometrics, among others. This course serves as an introduction to multiple changepoint detection methods.
This course will first address the simpler task of detecting a single changepoint in the mean of a univariate data stream. This is crucial for understanding several state-of-the-art approaches designed for detecting multiple changepoints. Subsequently, we will delve into the fundamentals of two classical approaches for multiple changepoint detection: (1) binary segmentation and (2) dynamic programming. We will review their statistical and computational properties and explain some of their recent improvements.
We will illustrate the application of these approaches to genomic datasets using the R programming language.
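
The course examples use R; as a flavour of its starting point, here is a minimal Python sketch (our illustration, not course material) of single changepoint detection in the mean: it exhaustively scans all split points and keeps the one minimizing the total squared error of a two-segment piecewise-constant fit. Function and variable names are our own.

```python
import numpy as np

def single_changepoint(y):
    """Split index minimizing the squared-error cost of fitting
    each of the two resulting segments by its own mean."""
    n = len(y)
    best_tau, best_cost = None, np.inf
    for tau in range(1, n):              # segments y[:tau] and y[tau:]
        left, right = y[:tau], y[tau:]
        cost = ((left - left.mean()) ** 2).sum() \
             + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Toy stream: the mean shifts from 0 to 3 at index 50.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print(single_changepoint(y))             # close to 50
```

Binary segmentation applies this single-split search recursively to the segments it creates, while dynamic programming optimizes over all segmentations exactly; both are covered in the course.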



Teacher: Maria Sofia Bucarelli

Title: Deep Learning Theory and Robustness
When: 3-6 June 2025

Abstract:
This four-day course offers a brief exploration of the theoretical foundations of deep learning, highlighting key topics such as robustness to noisy labels, modern learning theory, and the expressive power of deep networks. In particular, we discuss strategies for handling noisy labels and revisit classical learning theory in the context of large-scale neural networks. Attention is paid to phenomena such as double descent (with a brief discussion of kernel regimes, time permitting), as well as the critical role of depth in the representational power of deep networks. The course also briefly addresses properties of the loss landscape, implicit regularization, lazy training, and the emerging practice of model merging. In the final lesson, participants are introduced to diffusion-based generative modeling and flow matching.
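
As a pointer to the final lesson's topic, the following is a minimal numpy sketch of the forward (noising) process behind diffusion-based generative models; the linear beta schedule and all constants are illustrative assumptions, not the course's specific choices.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # illustrative linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)    # cumulative signal-retention factors

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)            # a toy "data point"
xt = q_sample(x0, t=500, rng=rng)       # heavily noised version of x0
# A denoising network is trained to recover eps (or x0) from (xt, t);
# sampling then runs the learned reversal starting from pure noise.
```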



Teacher: Pasquale Minervini

Title: NLP/LLM crash-course
When: September - October 2025

Abstract:
This mini-course explores recent advancements in Natural Language Processing (NLP), focusing on the development and implications of large language models (LLMs). The curriculum begins by tracing the evolution of NLP technologies, from early neural models to the transformative impact of deep learning architectures such as Transformers. Much of the course is dedicated to large language models, detailing their design, training methodologies, and the emerging paradigm of scaling laws in AI. The course will then cover popular LLM alignment strategies, namely Instruction Fine-Tuning and Reinforcement Learning from Human Feedback (RLHF), two techniques for steering LLM outputs toward human values and preferences. Students will learn about the theoretical underpinnings of RLHF, its implementation challenges, and its role in improving the reliability and ethical grounding of model responses. Finally, the course covers Retrieval-Augmented Generation (RAG), which improves the relevance and factual accuracy of generated text by retrieving supporting content at inference time.
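
Since Transformers anchor much of the syllabus, here is a minimal numpy sketch of their core operation, scaled dot-product attention; shapes and names are illustrative, not course material.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-pair similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

# Toy example: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8)
```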



Teachers: Claudio Gallicchio / Andrea Ceni

Title: Deep learning for time-series
When: Second half of October 2025

Abstract:
This course offers a systematic exploration of deep learning methodologies for time-series analysis. It covers fundamental concepts of deep architectures, followed by recurrent and convolutional frameworks tailored to temporal data, and integrates randomization-based and reservoir computing approaches. Attention mechanisms and transformer models are introduced, with an emphasis on capturing long-range dependencies, and state space models are discussed in relation to sequential inference. Real-world applications contextualize theoretical insights and algorithmic formulations.
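
To make the reservoir computing idea concrete, here is a minimal echo state network sketch in numpy: a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained (here by ridge regression, on a toy next-step prediction task). All sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, ridge = 200, 1e-6
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))       # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius below 1

u = np.sin(0.1 * np.arange(500))[:, None]       # toy input signal
states = np.zeros((len(u), n_res))
h = np.zeros(n_res)
for t in range(len(u)):                         # run the fixed reservoir
    h = np.tanh(W_in @ u[t] + W @ h)
    states[t] = h

# Train only the readout: predict u[t+1] from the reservoir state at t.
X, y = states[:-1], u[1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print(f"train MSE: {np.mean((X @ W_out - y) ** 2):.2e}")
```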
