Bernt Schiele - Inherent Interpretability for Deep Learning in Computer Vision


Computer Vision has been revolutionized by Machine Learning and in particular Deep Learning. End-to-end trainable models achieve top performance across a wide range of computer vision tasks and settings. While recent progress is remarkable, current deep learning models are hard to interpret. In this talk, I will discuss a new class of neural networks that are performant image classifiers with a high degree of inherent interpretability. In particular, these novel networks perform classification through a series of input-dependent linear transformations, and the resulting model-inherent explanations outperform existing attribution methods both quantitatively and qualitatively.
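The abstract does not specify the architecture, but the core idea of summarizing a network's computation as an input-dependent linear transformation can be illustrated with a minimal sketch: for a bias-free ReLU network, the pattern of active units at a given input defines an exact linear map W(x) with output equal to W(x) @ x, which can serve as a faithful per-input explanation. All names and shapes below are illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer bias-free ReLU network. For any fixed input x, the
# active-unit pattern makes the whole network a linear map W(x),
# so the output equals W(x) @ x exactly.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def forward(x):
    h = W1 @ x
    return W2 @ np.maximum(h, 0.0)

def effective_linear_map(x):
    # d: 0/1 mask of ReLU units active for this particular input
    d = (W1 @ x > 0).astype(float)
    # Collapse the network into a single input-dependent matrix
    return W2 @ (d[:, None] * W1)

x = rng.standard_normal(4)
Wx = effective_linear_map(x)
assert np.allclose(forward(x), Wx @ x)  # exact linear summary per input
```

Each row of W(x) can then be read as an attribution vector over the input dimensions for the corresponding output class, which is what makes such decompositions attractive for interpretability.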

29/09/2025

Short bio:
Bernt Schiele has been a Director at the Max Planck Institute for Informatics and Professor at Saarland University since 2010. In 1994 he worked in the field of multi-modal human-computer interfaces at Carnegie Mellon University, Pittsburgh, PA, USA. In 1997 he obtained his PhD from INP Grenoble, France, in the field of computer vision. Between 1997 and 2000 he was a postdoctoral associate and Visiting Assistant Professor at the Massachusetts Institute of Technology, Cambridge, MA, USA. From 1999 until 2004 he was Assistant Professor at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Between 2004 and 2010 he was Full Professor at the computer science department of TU Darmstadt. He is a Fellow of ACM, IEEE, ELLIS, and IAPR.



© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma