The Bug The Better: Mining Bugs in Complex Programs
10/12/2024 10:00, Aula L1, Via del Castro Laurenziano 7a, Roma
Speaker: Flavio Toffalini (Ruhr-Universität Bochum)
Adversaries continuously exploit vulnerabilities to compromise systems, for example by crafting malicious JavaScript programs that hijack Web browsers and obtain remote code execution. The most effective strategy for preventing such exploitation and enhancing system security is identifying and patching bugs. However, discovering vulnerabilities in modern systems requires facing scalability issues and dealing with emerging attack surfaces.
This presentation will explore cutting-edge advancements in automated software testing, focusing on techniques to maximize the detection of security-critical bugs. Additionally, we will examine new challenges, such as errors injected by compilers into secure code, logic errors in Java programs, and erroneous code optimization in JavaScript engines.
|
A journey into pytorch, the ecosystem, and deep learning compilers
11/11/2024 14:00, Aula Magna, DIAG, Via Ariosto 25, Roma.
Speaker: Luca Antiga (Lightning AI)
PyTorch has become a key building block of modern AI. In this talk, we'll explore its journey from the early days, through the growth of its ecosystem and the pivotal role of open source, all the way to the recent rise of deep learning compilers. We'll dive into the technical aspects of compiler technologies and discuss how they are going to shape the future of AI infrastructure.
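As a small taste of the compiler technologies mentioned above, here is a minimal sketch (my own illustration, not material from the talk) of how a PyTorch 2.x model is handed to torch.compile, which traces and optimizes it while the call site stays unchanged:

    import torch
    import torch.nn as nn

    # A tiny model used only to illustrate the compiler entry point.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    # torch.compile (PyTorch 2.x) captures the model's graph and lowers it
    # through TorchDynamo/TorchInductor; the calling code stays the same.
    compiled_model = torch.compile(model)

    x = torch.randn(32, 64)
    out = compiled_model(x)  # first call triggers compilation, later calls reuse it
    print(out.shape)         # torch.Size([32, 10])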
|
Retrieval Augmented Generation (RAG): Applications, Limitations, and Future Directions
22/10/2024 15:00, Aula Magna, DIAG, Via Ariosto 25, Roma.
Speaker: Fabio Petroni (Samaya AI)
Retrieval Augmented Generation (RAG) is a technique we proposed in 2020 that
allows generative AI models to access external information, enhancing their
responses to prompts. Since then, the popularity of this approach has
skyrocketed, becoming the de facto standard for handling knowledge-intensive
tasks in both academia and industry. In this talk, I will describe various
applications of RAG, including improving Wikipedia verifiability, and
provide a glimpse into the work we’re doing at Samaya AI. I will then
discuss some limitations of this architecture, such as the “lost in the
middle” effect, and conclude by outlining future research directions that I
find most exciting.
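To make the retrieve-then-generate idea concrete, a minimal sketch follows (my own illustration; the toy word-overlap retriever and the `generate` callable are stand-ins for a real retriever and language model): the top retrieved passages are prepended to the prompt before generation.

    def retrieve(query, corpus, k=3):
        """Toy retriever: rank passages by word overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
        return scored[:k]

    def rag_answer(query, corpus, generate):
        """Retrieval Augmented Generation: prepend retrieved context to the prompt."""
        passages = retrieve(query, corpus)
        context = "\n".join(passages)
        prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        return generate(prompt)   # `generate` is any text-generation model

    # Example usage with a stand-in "model" that simply echoes the prompt.
    corpus = ["RAG was proposed in 2020.", "Paris is the capital of France."]
    print(rag_answer("When was RAG proposed?", corpus, generate=lambda p: p))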
|
Parallelizing GPU-based Mini-Batch Graph Neural Network Training
3/7/2024 11:30, Aula Magna, DIAG, Via Ariosto 25, Roma.
Speaker: Marco Serafini (UMass Amherst)
Many datasets are best represented as graphs of entities connected by relationships rather than as a single uniform dataset or table. Graph Neural Networks (GNNs) have been used to achieve state-of-the-art performance in tasks such as classification and link prediction. This talk will discuss recent research on scalable GNN training.
The talk will focus on the popular mini-batch approach to GNN training, where each iteration consists of three steps: sampling the k-hop neighbors of the mini-batch, loading the samples onto the GPUs, and training. The first part of the talk will discuss NextDoor, which showed for the first time that GPU-based sampling can significantly speed up end-to-end GNN training. To maximize the utilization of GPU resources and speed up sampling, NextDoor proposes a new form of parallelism, called transit parallelism. The second part of the talk focuses on a new approach called split parallelism, which runs the entire mini-batch training pipeline on GPUs. It presents a system called GSplit, which avoids redundant data loads and has all GPUs perform sampling and training cooperatively on the same mini-batch. Finally, the last part of the talk will discuss results from an experimental comparison between full-graph and mini-batch training systems.
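To anchor the three-step pipeline described above, the sketch below (my own minimal illustration in plain PyTorch, not code from NextDoor or GSplit; the toy graph, fanout, and linear stand-in model are assumptions) walks through one iteration: sample the k-hop neighbors of the mini-batch seeds, load the sampled features onto the GPU, and run a training step.

    import random
    import torch
    import torch.nn as nn

    # Toy graph as an adjacency list; in practice this would be a large graph store.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    features = torch.randn(4, 8)            # one 8-dim feature vector per node
    labels = torch.tensor([0, 1, 0, 1])

    def sample_khop(seeds, k, fanout=2):
        """Step 1: sample the k-hop neighborhood of the mini-batch seeds."""
        nodes, frontier = set(seeds), set(seeds)
        for _ in range(k):
            nxt = set()
            for v in frontier:
                nxt.update(random.sample(adj[v], min(fanout, len(adj[v]))))
            nodes |= nxt
            frontier = nxt
        return sorted(nodes)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(8, 2).to(device)       # stand-in for a real GNN layer
    opt = torch.optim.Adam(model.parameters())

    for seeds in ([0, 1], [2, 3]):           # mini-batches of seed nodes
        batch = sample_khop(seeds, k=2)                   # 1) sampling
        x = features[batch].to(device)                    # 2) load samples onto the GPU
        y = labels[batch].to(device)
        loss = nn.functional.cross_entropy(model(x), y)   # 3) training step
        opt.zero_grad(); loss.backward(); opt.step()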
|
Fighting against cyber threats from a system perspective
11/6/2024 12:00, Aula A7, DIAG, Via Ariosto 25, Roma
Speaker: David Bromberg (Univ. of Rennes IRISA)
Cyber attacks have now invaded our daily lives. According to a report by the European police agency Europol, cybercrime threats are exploding in Europe. Not a day goes by without discovering that an institution or a company has been attacked. In this talk we will explore how research in systems and distributed systems may improve resilience to cyber attacks along three axes, targeting mobile systems, distributed systems, and operating systems:
(I) the astonishingly widespread adoption of the Android operating system has been accompanied by the spread of malware across the Android ecosystem at an alarming rate, which leads us to study how to strengthen the robustness of mobile systems such as Android;
(II) peer sampling is a key component of distributed systems for overlay management and information dissemination, yet it is regularly challenged by Byzantine nodes, which leads us to revisit the field with new algorithms and to investigate how SGX hardware enclaves can improve resilience to such threats;
(III) a significant amount of research focuses on defending against cyber attacks such as ransomware, but little on getting systems back up and running once they have been attacked.
In this talk, we will focus specifically on the first axis.
|
Leveraging Textual Specifications for Automated Attack Discovery in Network Protocols
28/5/2024 12:00, Aula B2, DIAG, Via Ariosto 25, Roma
Speaker: Cristina Nita-Rotaru (Northeastern University)
Automated attack discovery techniques, such as attacker synthesis or model-based fuzzing, provide powerful ways to ensure network protocols operate correctly and securely. Such techniques, in general, require a formal representation of the protocol, often in the form of a finite state machine (FSM). Unfortunately, many protocols are only described in English prose. We show how to extract a protocol specification, in the form of an FSM, from RFCs. Unlike other works that rely on rule-based approaches or use off-the-shelf NLP tools directly, we suggest a data-driven approach for extracting FSMs from RFC documents. Specifically, we use a hybrid approach consisting of three key steps: (1) large-scale word-representation learning for technical language, (2) focused zero-shot learning for mapping protocol text to a protocol-independent information language, and (3) rule-based mapping from protocol-independent information to a specific protocol FSM. We show the generalizability of our FSM extraction by using the RFCs for six different protocols: BGPv4, DCCP, LTP, PPTP, SCTP, and TCP. We demonstrate how automated extraction of an FSM from an RFC can be applied to the synthesis of attacks, with TCP and DCCP as case studies. This work appeared as "Automated Attack Synthesis by Extracting Finite State Machines from Protocol Specification Documents" by Maria Leonor Pacheco, Max von Hippel, Ben Weintraub, Dan Goldwasser, and Cristina Nita-Rotaru, IEEE S&P 2022. Code available at: https://github.com/RFCNLP
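As a rough illustration of the kind of artifact the extraction targets (a sketch of my own, not the paper's output format or the RFCNLP code), a protocol FSM can be written as a transition table that an attack-synthesis tool can then explore; the TCP states and events below are a simplified subset chosen for illustration.

    # Minimal sketch of a protocol FSM as a transition table: (state, event) -> state.
    TCP_FSM = {
        ("CLOSED", "passive_open"):  "LISTEN",
        ("LISTEN", "recv_SYN"):      "SYN_RCVD",
        ("SYN_RCVD", "recv_ACK"):    "ESTABLISHED",
        ("ESTABLISHED", "recv_FIN"): "CLOSE_WAIT",
    }

    def run(fsm, start, events):
        """Replay a sequence of events; an unexpected event leaves the state unchanged."""
        state = start
        for ev in events:
            state = fsm.get((state, ev), state)
        return state

    # A benign handshake reaches ESTABLISHED...
    print(run(TCP_FSM, "CLOSED", ["passive_open", "recv_SYN", "recv_ACK"]))
    # ...while a synthesized attacker trace can probe which event orderings
    # lead to unexpected or unreachable states.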
|
Taming the Cost of Deep Neural Models: Hybrid Models to the Rescue?
16/5/2022 14:30, Aula Magna, DIAG, Via Ariosto 25, Roma
Speaker: Laks V.S. Lakshmanan (UBC Vancouver)
Deep learning models, and in particular large language models, have made great strides in many fields, including vision, language, and medicine. The impressive performance of large models comes at a significant price: the models tend to be billions to trillions of parameters in size, are expensive to train, have a huge operational cost, and typically need a cloud service for deployment. Meanwhile, considerable research effort has been devoted to designing smaller and cheaper models, at the price of restricted generalizability and performance. Not all queries we may wish to pose to a model are hard: some can be answered nearly as accurately by cheaper models at a fraction of the cost of the larger models, while the performance of cheaper models may suffer on other queries. Can we combine the best of both worlds by striking a balance between cost and performance? In this talk, I will describe two settings in which our group has tackled this issue.
In the first setting, we are interested in approximate answers to queries over model predictions. We show how, under some assumptions about the cheap model, queries can be answered with provably high precision or recall by using a judicious combination of invoking the large model on data samples and the cheap model on data objects. In the second setting, we are interested in learning a router which, given a query, predicts its level of hardness; based on this prediction, the query is routed either to the small model or to the large model. For both settings, results of extensive experiments show the effectiveness and efficiency of our approach.
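A minimal sketch of the second setting follows (my own illustration; the hardness scorer, threshold, and model handles are hypothetical stand-ins, not the actual system): a learned router scores each query and sends easy ones to the cheap model and hard ones to the large model.

    def route(query, hardness_model, small_model, large_model, threshold=0.5):
        """Hybrid inference: predict query hardness, then pick the cheaper
        model when the query looks easy and the large model otherwise."""
        hardness = hardness_model(query)          # score in [0, 1]
        if hardness < threshold:
            return small_model(query), "small"
        return large_model(query), "large"

    # Example usage with toy stand-ins for the three models.
    hardness_model = lambda q: min(len(q.split()) / 20.0, 1.0)  # longer == harder
    small_model = lambda q: f"[small] answer to: {q}"
    large_model = lambda q: f"[large] answer to: {q}"

    print(route("What is 2+2?", hardness_model, small_model, large_model))
    print(route("Summarize the trade-offs between model cost and accuracy "
                "for hybrid small/large deployments in production.",
                hardness_model, small_model, large_model))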
|
Securing Data in the Cyberspace: Challenges and Emerging Solutions
10/5/2022 14:30, Aula B203, DIAG, Via Ariosto 25, Roma
Speaker: Ivan Visconti (Univ. Salerno)
In this talk, I'll discuss significant challenges in data protection due to current and future threats. I'll present recent research results that, leveraging advanced cryptographic tools, provide new defenses in several domains. In particular, I'll describe efficient zero-knowledge proofs and their applications to:
a) detecting deep fakes/disinformation through novel image authentication mechanisms;
b) long-term data protection via post-quantum security;
c) data sanitization in tough scenarios (blockchains/AI).
(online: https://uniroma1.zoom.us/j/84550558864?pwd=UTdtNElrWHdWVytKckxBbkJZN25uUT09)
|
Towards Autonomous and Adaptable Digital Twins
02/02/2024 15:00, Aula Magna, DIAG, Via Ariosto 25, Roma
Speaker: Andrea Matta (Politecnico di Milano)
With the advent of Industry 4.0, digital representations of products and manufacturing systems have been considered central for optimizing their development, production, and delivery phases. Digital twins are not simply conceived as simulation models of their physical counterpart; rather, they are developed as a means for better understanding and controlling the real system. To stay aligned with physical systems throughout their lifecycle, digital twins need automated synchronization and model updates. This talk will explore different data-driven approaches to generating models of process flows and equipment from different data views, and will discuss the advantages and disadvantages of these approaches to provide a comprehensive understanding. Additionally, techniques for online validation and synchronization of digital twins will be presented, ensuring that the digital twin accurately reflects the physical system in real time. Applications in manufacturing and circular economies will be described, showcasing their potential to optimize production processes, reduce waste, and enhance sustainability.
|