BIAGIO LA ROSA

PhD (Dottore di ricerca)

Cycle: XXXVI


Supervisor: Roberto Capobianco

Thesis title: Explaining Deep Neural Networks by Leveraging Intrinsic Methods

Deep neural networks have been pivotal in driving AI advancements over the past decade, revolutionizing domains such as gaming, biology, autonomous systems, and voice and text assistants. Despite their impact, these networks are often regarded as black-box models because of their intricate structure and the absence of explanations for their decisions. This opacity poses a significant challenge to the wider adoption and trustworthiness of AI systems. This thesis addresses the issue by contributing to the field of eXplainable AI, focusing on enhancing the interpretability of deep neural networks. Its core contributions are novel techniques that make these networks more interpretable by leveraging an analysis of their inner workings. Specifically, the contributions are threefold. First, the thesis introduces designs for self-explanatory deep neural networks, such as the integration of external memory for interpretability purposes and the use of prototype-based and constraint-based layers across several domains. These architectures are designed to preserve most of the underlying black-box network, thereby maintaining or improving its performance. Second, the research presents novel investigations of neurons within trained deep neural networks, shedding light on overlooked phenomena related to their activation values. Lastly, the thesis analyzes the application of explanation techniques in the field of visual analytics, assessing the maturity of their adoption and the potential of these systems to convey explanations to users effectively. In summary, this thesis contributes to the growing field of eXplainable AI by proposing intrinsic techniques that enhance the interpretability of deep neural networks. By mitigating the opacity of deep neural networks and applying the proposed techniques to several domains, the research aims to foster trust in AI systems and facilitate their wider adoption.
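To give a concrete flavor of the prototype-based layers mentioned above, the following is a minimal PyTorch sketch of a generic prototype layer: it scores an input embedding by its similarity to a small set of learned prototypes, so that the activated prototypes can serve as an explanation of the prediction. The class name, the similarity function, and all dimensions are illustrative assumptions and do not reproduce the specific architectures proposed in the thesis.

import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Scores an embedding by its similarity to a set of learned prototypes.

    The prototypes live in the same latent space as the backbone's output,
    so a prediction can be explained by the prototypes it activates most.
    """
    def __init__(self, embedding_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embedding_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, z: torch.Tensor):
        # Squared Euclidean distance between each embedding and each prototype.
        dists = torch.cdist(z, self.prototypes) ** 2
        # Map distances to similarities: small distance -> high activation.
        similarities = torch.log((dists + 1.0) / (dists + 1e-4))
        logits = self.classifier(similarities)
        return logits, similarities  # similarities double as the explanation

# Usage: plug the layer on top of any (frozen or trainable) backbone encoder.
backbone_dim, n_prototypes, n_classes = 128, 10, 3
layer = PrototypeLayer(backbone_dim, n_prototypes, n_classes)
z = torch.randn(4, backbone_dim)          # embeddings produced by a backbone
logits, explanation = layer(z)
print(logits.shape, explanation.shape)    # torch.Size([4, 3]) torch.Size([4, 10])

Because the layer replaces only the final classification head, the rest of the black-box network is left intact, which is the design principle highlighted in the abstract: interpretability is added while performance is maintained.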

Publications

11573/1662244 - 2023 - A self-interpretable module for deep image classification on small data
La Rosa, B.; Capobianco, R.; Nardi, D. - 01a Journal article
journal: APPLIED INTELLIGENCE (Dordrecht, Netherlands: Springer) pp. 9115-9147 - issn: 0924-669X - wos: WOS:000836106300001 (5) - scopus: 2-s2.0-85135404189 (9)

11573/1666704 - 2023 - State of the Art of Visual Analytics for eXplainable Deep Learning
La Rosa, Biagio; Blasilli, Graziano; Bourqui, Romain; Auber, David; Santucci, Giuseppe; Capobianco, Roberto; Bertini, Enrico; Giot, Romain; Angelini, Marco - 01a Journal article
journal: COMPUTER GRAPHICS FORUM (Oxford: Blackwell Publishers) pp. 319-355 - issn: 1467-8659 - wos: WOS:000928775300001 (26) - scopus: 2-s2.0-85147373343 (34)

11573/1707639 - 2023 - Towards a fuller understanding of neurons with Clustered Compositional Explanations
La Rosa, Biagio; Gilpin, Leilani H.; Capobianco, Roberto - 04b Conference paper in proceedings volume
conference: Thirty-seventh Annual Conference on Neural Information Processing Systems (New Orleans; United States of America)
book: Advances in Neural Information Processing Systems - ()

11573/1691190 - 2023 - Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening
Proietti, Michela; Ragno, Alessio; La Rosa, Biagio; Ragno, Rino; Capobianco, Roberto - 01a Journal article
journal: MACHINE LEARNING (Springer) pp. 2013-2044 - issn: 0885-6125 - wos: WOS:001091343300001 (0) - scopus: 2-s2.0-85175337233 (0)

11573/1624612 - 2022 - A Discussion about Explainable Inference on Sequential Data via Memory-Tracking
La Rosa, B.; Capobianco, R.; Nardi, D. - 04b Conference paper in proceedings volume
conference: 2021 International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021 DP (Virtual)
book: CEUR Workshop Proceedings - ()

11573/1662243 - 2022 - Detection accuracy for evaluating compositional explanations of units
Makinwa, S. M.; La Rosa, B.; Capobianco, R. - 04b Conference paper in proceedings volume
conference: 20th International Conference of the Italian Association for Artificial Intelligence, AIxIA 2021 (Virtual Event)
book: Lecture Notes in Computer Science - (978-3-031-08420-1; 978-3-031-08421-8)

11573/1662081 - 2022 - Prototype-based Interpretable Graph Neural Networks
Ragno, Alessio; La Rosa, Biagio; Capobianco, Roberto - 01a Journal article
journal: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE (Piscataway, NJ: IEEE) pp. 1486-1495 - issn: 2691-4581 - wos: WOS:000893639106029 (1) - scopus: 2-s2.0-85142856655 (3)

11573/1397019 - 2020 - Explainable inference on sequential data via memory-tracking
La Rosa, Biagio; Capobianco, Roberto; Nardi, Daniele - 04c Conference paper in journal
journal: IJCAI pp. 2006-2013 - issn: 1045-0823 - wos: WOS:000764196702019 (4) - scopus: 2-s2.0-85097338762 (7)
conference: International Joint Conference on Artificial Intelligence (Yokohama; Japan)
