Thesis title: Towards Trustworthy Graph Neural Networks
This thesis aims to develop and deploy deep learning algorithms for graph-structured data that people can trust in real-world applications. Trust, from a human perspective, requires robustness to malicious input manipulation, privacy protection for the ingested data, fair treatment of every individual or group of individuals in the data, and transparency in the decision process. However, the ``vanilla'' formulation of graph neural networks (GNNs) does not satisfy all these requirements. We address these issues one by one through a series of works. First, we propose solutions to create robust architectures tailored to the dataset at hand, and fully distributed GNNs to preserve data privacy.

We then put an accent on the fairness of the predictions by removing bias directly from the data. In particular, the tendency of similar nodes to cluster together in many real-world graphs (i.e., homophily) can dramatically worsen the fairness of GNN predictions. We first propose a biased pruning of the graph connections that reduces the homophily of the sensitive traits; second, instead of dropping edges at random, we learn a new and fairer version of the graph's topology.

Finally, we associate additional information with the GNNs' predictions to allow human experts to interpret the model and extract knowledge from it. We develop a meta-learning framework that improves the explainability of a GNN at training time by steering the optimization process towards an ``interpretable'' local minimum. We then propose an architecture trained over a bag of explanation subgraphs, which improves prediction performance and constitutes an easy-to-interpret explanation. To conclude, we present a real use case in an industrial setting, where GNNs are combined with Convolutional Neural Networks to re-identify objects from aerial photos, potentially improving the quality of service for millions of people.
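The idea of biased pruning with respect to a sensitive trait can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: the function names, the edge-list representation, and the single uniform drop probability for same-group edges are illustrative assumptions.

```python
import random

def biased_prune(edges, sensitive, drop_prob=0.5, seed=0):
    """Drop same-group (homophilous) edges with probability drop_prob.

    edges: list of (u, v) node pairs.
    sensitive: dict mapping each node to its sensitive-attribute group.
    Cross-group edges are always kept, so pruning can only lower the
    homophily of the sensitive trait.
    """
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        if sensitive[u] == sensitive[v] and rng.random() < drop_prob:
            continue  # prune a homophilous edge w.r.t. the sensitive trait
        kept.append((u, v))
    return kept

def homophily(edges, sensitive):
    """Fraction of edges linking nodes with the same sensitive label."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if sensitive[u] == sensitive[v])
    return same / len(edges)

# Toy example: two groups 'a' and 'b' on a small graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
sens = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
pruned = biased_prune(edges, sens, drop_prob=1.0)
# With drop_prob=1.0 every same-group edge is removed,
# so homophily(pruned, sens) == 0.0 while homophily(edges, sens) == 0.4.
```

In contrast to dropping edges uniformly at random, the bias towards same-group edges directly targets the homophily that harms fairness; the second fairness contribution replaces this fixed heuristic with a learned, fairer topology.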