From Computational Argumentation to Explanation


Computational argumentation is a well-established field in (mostly symbolic) AI focusing on defining argumentation frameworks comprising sets of arguments and dialectical relations between them (e.g. of attack and, in addition or instead, of support), as well as so-called semantics (e.g. amounting to definitions of dialectically acceptable sets of arguments or of dialectical strength of arguments, satisfying desirable dialectical properties, such as that supports for an argument should strengthen it). In this talk I will overview our recent efforts towards deploying computational argumentation to obtain and deliver to users explanations in different formats for a variety of systems, including data-driven classifiers. I will also argue that explainable AI (XAI), which has witnessed unprecedented growth in recent years, can be ideally supported by computational argumentation models, whose dialectical nature matches well some basic desirable features of explanatory activities.
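
As a minimal illustration of the kind of structure the abstract refers to (not part of the talk itself), the sketch below builds a Dung-style abstract argumentation framework with only an attack relation and computes its grounded extension as the least fixed point of the characteristic function; the argument names and the specific semantics chosen are assumptions made purely for the example.

```python
# Minimal sketch (illustrative, not from the talk): a Dung-style abstract
# argumentation framework given as a set of arguments and a set of attack
# pairs, with the grounded extension computed by iterating the
# characteristic function from the empty set.

def defended(args, attacks, s):
    """Arguments defended by set s: every attacker is itself attacked by some member of s."""
    return {
        a for a in args
        if all(any((d, b) in attacks for d in s) for (b, t) in attacks if t == a)
    }

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function, starting from the empty set."""
    s = set()
    while True:
        nxt = defended(args, attacks, s)
        if nxt == s:
            return s
        s = nxt

if __name__ == "__main__":
    # Example: a attacks b, b attacks c; the grounded extension is {a, c},
    # since a is unattacked and a defends c against its only attacker b.
    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}
    print(grounded_extension(args, attacks))  # {'a', 'c'}
```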

09/06/2021


