Artificial Intelligence and Law: Perspectives and Open Problems
19/12/2022 16:00, Aula Magna, DIAG, Via Ariosto 25, Roma
Thanks to Artificial Intelligence (AI), activities that until now were performed exclusively by people can be entrusted to machines, which have acquired certain capacities to reason, learn, and act. The scientific and technological successes of AI raise fundamental social, ethical, and legal questions. We ask whether AI technologies can be controlled and directed toward the good of individuals and society, or whether they will instead serve particular interests, to the detriment of individual rights and social values; whether they will allow us to improve our institutions or will end up overwhelming them; whether they can help us create and apply the law according to rationality and justice, or whether they will instead contribute to making the law more rigid, opaque, and unfair. Today's meeting helps us address these questions, both through talks by experts from the various disciplines involved (law, computer science, engineering, philosophy, ...) and through a discussion with the author of the contents of the very recent volume "L'intelligenza artificiale e il diritto" (Giovanni Sartor, Giappichelli, 2022).
|
Can We Trust Machine Learning Models?
14/12/2022 15:00, Aula 201, Palazzina D, Viale Regina Elena 295, Roma
Speaker: Vitaly Shmatikov (Cornell Tech)
Modern machine learning models achieve super-human accuracy on tasks such as image classification and natural-language generation, but accuracy does not tell the entire story of what these models are learning. In this talk, I will look at today's machine learning from a security and privacy perspective, and ask several fundamental questions. Could models trained on sensitive private data memorize and leak this data? When training involves crowd-sourced data, untrusted users, or third-party code, could models learn malicious functionality, causing them to produce incorrect or biased outputs? What damage could result from such compromised models?
I will illustrate these vulnerabilities with concrete examples and discuss the benefits and tradeoffs of technologies (such as federated learning) that promise to protect the integrity and privacy of machine learning models and their training data. I will then outline practical approaches towards making trusted machine learning a reality.
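As a toy illustration of one of the technologies mentioned above, the following sketch (all names and hyperparameters are illustrative, not taken from the talk) shows the core idea of federated averaging: each client trains on its own private data and shares only model weights, which the server combines, so raw training data never leaves the client.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a least-squares loss.
    The raw data (X, y) never leaves the client; only weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: average the clients' locally trained models,
    weighted by each client's local dataset size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total
```

Note that sharing weights instead of data mitigates, but does not eliminate, the leakage risks the talk discusses: model updates themselves can reveal information about the training data.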
|
The Persistent Problem of Software Insecurity
28/11/2022 15:00, Aula Magna, DIAG, Via Ariosto 25, Roma
Speaker: Elisa Bertino (Purdue University)
Software plays an increasingly key role in every infrastructure and application domain we can think of. Unfortunately, as we all know, software systems are still often insecure, despite the fact that the "problem of software security" has been known to the industry and research communities for decades. In this talk, I'll first present results of several analyses that we have carried out on authentication vulnerabilities in mobile applications, including an extensive study to detect vulnerable implementations of pseudo-random number generators (PRNGs) in mobile apps. The study was carried out using an analysis tool, OTP-Lint, that assesses implementations of PRNGs in an automated manner without requiring the source code. By analyzing 6,431 commercial apps downloaded from two well-known app markets, OTP-Lint identified 399 vulnerable apps that generate predictable OTP values. I'll then discuss other factors that today complicate the problem of software security, a notable one being the software supply chain. I'll conclude by discussing "what it takes" to convince all parties involved in the software ecosystem to address the problem of software insecurity, and outline research directions.
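The vulnerability class the study targets can be sketched in a few lines. This is not OTP-Lint's detection logic, just a minimal Python illustration of the anti-pattern it flags: an OTP drawn from a non-cryptographic PRNG seeded with a guessable value is fully predictable to an attacker who can narrow down the seed.

```python
import random
import secrets

def insecure_otp(timestamp_ms):
    """ANTI-PATTERN: seeding a non-cryptographic PRNG with a guessable
    value (e.g. the current time). An attacker who can bound the
    timestamp can replay the seed and predict the OTP."""
    rng = random.Random(timestamp_ms)  # deterministic, guessable seed
    return f"{rng.randrange(1_000_000):06d}"

def secure_otp():
    """Correct approach: draw the OTP from a CSPRNG backed by OS entropy."""
    return f"{secrets.randbelow(1_000_000):06d}"
```

Given the same timestamp, `insecure_otp` always returns the same six digits, which is exactly what makes the 399 flagged apps' OTPs predictable.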
|
Better Together: Combining Sketching and Sampling for Effective Stream Processing
17/11/2022 11:00, Aula Magna, DIAG, Via Ariosto 25, Roma
Speaker: Prof. Roy Friedman (Technion)
Abstract: Monitoring large data streams and maintaining statistics about them is a challenging task, revolving around the tradeoff triangle between memory frugality, computational complexity, and accuracy. The two common approaches for addressing these problems are sketching and sampling. In this talk, I will present a couple of examples of how an effective combination of the two can yield better results than either of them.
The first example is NitroSketch, a generic framework that boosts the performance of all sketches that employ multiple counter arrays, including, e.g., the well-known count-min sketch, count sketch, and UnivMon. NitroSketch systematically addresses the performance bottlenecks of sketches without sacrificing robustness and generality. Its key contribution is the careful synthesis of rigorous yet practical solutions to reduce the number of per-packet CPU and memory operations. NitroSketch is implemented on three popular software switch platforms (Open vSwitch-DPDK, FD.io-VPP, and BESS). Our performance evaluation shows that accuracy is comparable to unmodified sketches while attaining up to two orders of magnitude speedup and up to a 45% reduction in CPU usage.
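For readers unfamiliar with the counter-array sketches NitroSketch accelerates, here is a toy count-min sketch in Python (the hash construction and sizes are illustrative, and this is plain count-min, not NitroSketch itself): d rows of w counters, each item hashed into one counter per row, with the minimum over rows as the estimate.

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: depth rows of width counters.
    Estimates only ever overcount (hash collisions inflate, never deflate)."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One independent-looking hash per row, via a per-row salt.
        for i in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=bytes([i] * 8)).digest()
            yield i, int.from_bytes(h[:8], "big") % self.width

    def add(self, item, count=1):
        for i, b in self._buckets(item):
            self.rows[i][b] += count

    def estimate(self, item):
        # Taking the minimum across rows limits collision-induced error.
        return min(self.rows[i][b] for i, b in self._buckets(item))
```

The per-packet cost here is `depth` hashes and counter increments per insertion; reducing exactly this kind of per-packet work is what NitroSketch targets.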
The second example is SQUAD, a novel algorithm for tracking quantiles (e.g., tail latencies) of significant items within a stream, where an item can be the source IP + destination IP address pair in a networking application, a URI or a user ID in a web service, or an object ID in a key-value store. While quantile sketches have been studied in the past, naively applying one instance of such a sketch to each item is very wasteful of memory. Similarly, applying sampling alone also requires prohibitive amounts of memory. In contrast, SQUAD addresses this problem by combining sampling and sketching in a way that improves the asymptotic space complexity. Intuitively, SQUAD allocates a sketch only to items identified as likely to be significant, and uses a background sampling process to capture the behavior of an item's quantiles before it is allocated a sketch. This allows SQUAD to use fewer samples and sketches. An empirical evaluation demonstrates SQUAD's superiority using extensive simulations on real-world traces.
* Based on joint works with Ran Ben-Basat, Vladimir Braverman, Gil Einziger, Yaron Kassner, Zaoxing Liu, Vyas Sekar, and Rana Shahout
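The sampling-plus-sketching intuition described above can be sketched in miniature. This toy (class name, thresholds, and the use of an exact value list in place of a real quantile sketch are all illustrative; it is not the SQUAD algorithm from the paper) keeps one global reservoir of samples and promotes an item to its own per-item structure only once it looks significant, seeding that structure with the item's reservoir samples.

```python
import random

class ToyQuantileTracker:
    """Toy illustration of the SQUAD idea: background reservoir sampling
    over the whole stream, plus a dedicated per-item structure allocated
    only once an item's count crosses a significance threshold."""

    def __init__(self, promote_after=100, reservoir_size=500, seed=0):
        self.promote_after = promote_after
        self.reservoir_size = reservoir_size
        self.counts = {}      # per-item update counts
        self.reservoir = []   # global (item, value) samples
        self.sketches = {}    # item -> list of values (stand-in for a quantile sketch)
        self.seen = 0
        self.rng = random.Random(seed)

    def update(self, item, value):
        self.seen += 1
        # Background sampling: a uniform reservoir over the whole stream.
        if len(self.reservoir) < self.reservoir_size:
            self.reservoir.append((item, value))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.reservoir_size:
                self.reservoir[j] = (item, value)
        self.counts[item] = self.counts.get(item, 0) + 1
        if item in self.sketches:
            self.sketches[item].append(value)
        elif self.counts[item] >= self.promote_after:
            # Promote: seed the new per-item structure with this item's
            # reservoir samples, recovering its pre-promotion behavior.
            self.sketches[item] = [v for it, v in self.reservoir if it == item]

    def quantile(self, item, q):
        """Return the q-quantile for a promoted item, else None."""
        vals = sorted(self.sketches.get(item, []))
        return vals[int(q * (len(vals) - 1))] if vals else None
```

Insignificant items cost only their share of the shared reservoir, while significant items get accurate quantiles; the real algorithm replaces the exact value list with a compact quantile sketch to obtain its space bounds.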
|
Secure Biometric Authentication Using Privacy-Preserving Cryptographic Protocols
17/11/2022, B222 @DIAG (Via Ariosto 25), or online
Speaker: Paolo Gasti, New York Institute of Technology (NYIT)
As an authentication method, biometrics offers unparalleled convenience and security. With very little for users to remember and do, there is also very little that they can do incorrectly, thus limiting the attack surface of an authentication system. Unfortunately, biometrics also presents a challenging privacy/security tradeoff: biometric data is the ultimate personally identifiable information (PII), and is highly regulated in various jurisdictions in Europe, Asia, and the United States. As a result, practical large-scale biometric deployments must provide strong protection of the data they process. This talk will present recent advances in the area of cryptographic protocols applied to biometric recognition, for the purpose of protecting biometric data during and after authentication. We will introduce various concepts around biometric authentication, such as biometric liveness and authentication error rates, and provide a general overview of modern cryptographic techniques designed to guarantee strong biometric privacy.
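To make the notion of authentication error rates concrete, here is a minimal plaintext matcher for binary biometric templates (the Hamming-distance comparison and the 0.32 threshold are illustrative assumptions, loosely modeled on iris-code matching, not the talk's protocols). Real privacy-preserving systems of the kind the talk covers evaluate such a distance under encryption rather than on plaintext templates.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match(template: bytes, probe: bytes, threshold: float = 0.32) -> bool:
    """Accept if the fraction of differing bits is below the threshold.
    Raising the threshold lowers the false-reject rate but raises the
    false-accept rate: the tradeoff authentication error rates quantify."""
    n_bits = len(template) * 8
    return hamming_distance(template, probe) / n_bits < threshold
```

Because biometric readings are noisy, the probe never equals the enrolled template exactly; the threshold absorbs that noise, and moving it trades one error rate against the other.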
|