FEDERICA GRANESE

PhD (Dottoressa di ricerca)

Cycle: XXXV


Advisors: Daniele Gorla, Catuscia Palamidessi

Thesis title: Towards Securing Machine Learning Algorithms

Deep Neural Networks (DNNs) have seen significant advances in recent years and are nowadays widely used in a variety of applications. When it comes to safety-critical systems, a core challenge is to develop methods and tools that make these algorithms reliable, particularly for non-specialists who may treat them as "black boxes" with no further checks. The purpose of this thesis is to investigate methods that enable the safe use of these technologies.

In the first part, we tackle the problem of deciding whether the prediction of a DNN classifier should (or should not) be trusted, so that it can be accepted or rejected accordingly. To this end, we propose a new detector that approximates the most powerful (Oracle) discriminator, which is based on the probability of classification error with respect to the true class posterior probability. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient propagation for input pre-processing is allowed. The proposed detector can be applied to any pre-trained model, requires no prior information about the underlying dataset, and is as simple as the simplest methods available in the literature.

In the second part, we address the problem of multi-armed adversarial attack detection. Detection methods are generally validated by assuming a single, implicitly known attack strategy, which does not necessarily account for real-life threats. This can lead to an overoptimistic assessment of a detector's performance and may bias the comparison of competing detection schemes. To overcome this limitation, we propose a novel multi-armed framework for evaluating detectors against several attack strategies, including three new objectives for generating attacks. The proposed performance metric is based on the worst-case scenario: detection is successful if and only if all the different attacks are correctly recognized. Moreover, within this setting, we formally derive a simple yet effective method to aggregate the decisions of multiple trained detectors, possibly provided by a third party. While each individual detector tends to underperform on, or fail to detect, attack types it has never seen at training time, our framework successfully aggregates the knowledge of the available detectors to guarantee a robust detection algorithm. The proposed method has many advantages: it is simple, as it does not require further training of the given detectors; it is modular, allowing existing (and future) methods to be merged into a single one; and it is general, since it can simultaneously recognize adversarial examples created according to different algorithms and training (loss) objectives.
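As a concrete illustration of the first part, the following sketch shows how a rejection rule built only on the classifier's soft-predictions (the TBB scenario) might look. The specific statistic g(x) = sum_y P(y|x)^2 and the threshold rule are assumptions made for illustration, not necessarily the exact discriminator derived in the thesis; any pre-trained classifier's softmax outputs can be plugged in.

```python
import numpy as np

def rejection_score(softmax_probs: np.ndarray) -> np.ndarray:
    """Soft-prediction-only (TBB) rejection score.

    softmax_probs: shape (n_samples, n_classes). The statistic
    g(x) = sum_y P(y|x)^2 is the probability that two labels drawn
    independently from the predicted posterior agree, so 1 - g(x)
    estimates the probability of classification error.
    """
    g = np.sum(softmax_probs ** 2, axis=1)
    return (1.0 - g) / g  # large values suggest a misclassification

def should_reject(softmax_probs: np.ndarray, gamma: float) -> np.ndarray:
    """Distrust (reject) a prediction when its score exceeds gamma."""
    return rejection_score(softmax_probs) >= gamma

# The second prediction spreads its mass across classes and is rejected.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25]])
print(should_reject(probs, gamma=1.0))  # [False  True]
```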
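For the second part, the sketch below illustrates how the worst-case metric and the aggregation of pre-trained detectors could be realized in code. The OR-style aggregation rule and the function names are hypothetical simplifications; the thesis derives the aggregation scheme formally, and the exact rule may differ.

```python
import numpy as np

def aggregate(decisions: np.ndarray) -> np.ndarray:
    """Combine binary decisions of several pre-trained detectors.

    decisions: boolean array of shape (n_detectors, n_samples), where
    entry (i, j) is True if detector i flags sample j as adversarial.
    Flagging an input whenever any detector flags it (a hypothetical
    rule) requires no further training of the individual detectors.
    """
    return np.any(decisions, axis=0)

def worst_case_detection(per_attack_decisions: list) -> np.ndarray:
    """Worst-case (multi-armed) success over several attack strategies.

    per_attack_decisions: one aggregated boolean vector of shape
    (n_samples,) per attack strategy. Detection counts as successful
    if and only if all attack variants of a sample are recognized.
    """
    return np.all(np.stack(per_attack_decisions), axis=0)

# Two detectors, three samples, two attack strategies.
attack_a = aggregate(np.array([[True, False, True],
                               [False, False, True]]))
attack_b = aggregate(np.array([[True, True, False],
                               [False, True, False]]))
print(worst_case_detection([attack_a, attack_b]))  # [ True False False]
```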

Scientific publications

11573/1695523 - 2023 - On the (Im)Possibility of Estimating Various Notions of Differential Privacy
Gorla, Daniele; Jalouzot, Louis; Granese, Federica; Palamidessi, Catuscia; Piantanida, Pablo - 04b Conference paper in proceedings
conference: 24th Italian Conference on Theoretical Computer Science (Palermo, Italy)
book: Proceedings of ICTCS 2023

11573/1504031 - 2021 - Enhanced models for privacy and utility in continuous-time diffusion networks
Granese, F.; Gorla, D.; Palamidessi, C. - 01a Journal article
journal: INTERNATIONAL JOURNAL OF INFORMATION SECURITY (Springer) pp. 763-782 - issn: 1615-5262 - wos: WOS:000604193200001 (1) - scopus: 2-s2.0-85098732893 (1)

11573/1572393 - 2021 - DOCTOR: A Simple Method for Detecting Misclassification Errors
Granese, Federica; Romanelli, Marco; Gorla, Daniele; Palamidessi, Catuscia; Piantanida, Pablo - 04b Conference paper in proceedings
conference: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Sydney, Australia)
book: Proceedings of NeurIPS 2021

11573/1355276 - 2019 - Enhanced models for privacy and utility in continuous-time diffusion networks
Gorla, Daniele; Granese, Federica; Palamidessi, Catuscia - 04b Conference paper in proceedings
conference: Theoretical Aspects of Computing (Hammamet, Tunisia)
book: Theoretical Aspects of Computing – ICTAC 2019 (ISBN: 978-3-030-32505-3)
