Decision trees and tree ensembles are popular classification models for tabular data. Like other machine learning models, however, they are susceptible to evasion attacks at test time, where carefully crafted adversarial perturbations cause correctly classified inputs to be misclassified. In this talk, I present some recent results on the security verification of tree-based classifiers. First, I introduce a new security measure called resilience, which mitigates some of the issues of the traditional robustness measure, and I discuss how resilience can be soundly estimated for tree-based classifiers. Then, I introduce a new paradigm called "verifiable learning", which advocates the adoption of new training algorithms designed to learn models that are easy to verify. In particular, I present a new class of tree ensembles admitting security verification in polynomial time, thus escaping classic NP-hardness results, and I discuss how such models can be trained and efficiently verified.
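To make the verification problem concrete, below is a minimal sketch (my own illustration, not the speaker's implementation) of exact robustness checking for a single decision tree under an L∞ perturbation of radius eps: a traversal tracks the box of inputs reachable along each path, and the tree is robust on an instance iff every reachable leaf predicts the true label. The `Node` structure and `is_robust` function are hypothetical names, and splits are assumed axis-aligned with `x[feature] <= threshold` routed left. For a single tree this check is polynomial; the NP-hardness mentioned above arises from the interaction of many trees in an ensemble, which is what the presented class of ensembles is designed to sidestep.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class Node:
    feature: int = -1                  # split feature (internal nodes)
    threshold: float = 0.0             # split threshold (internal nodes)
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None        # predicted class (leaves only)


def is_robust(root: Node, x: np.ndarray, y: int, eps: float) -> bool:
    """True iff every input within L-inf distance eps of x is labeled y.

    Tracks the box of reachable inputs along each path, so each node is
    visited at most once: polynomial in the size of the tree.
    """
    def visit(node: Node, lo: np.ndarray, hi: np.ndarray) -> bool:
        if node.label is not None:                 # reachable leaf
            return node.label == y
        f, t = node.feature, node.threshold
        robust = True
        if lo[f] <= t:                             # left branch reachable
            hi2 = hi.copy()
            hi2[f] = min(hi2[f], t)                # x'[f] <= t on this path
            robust = visit(node.left, lo, hi2)
        if robust and hi[f] > t:                   # right branch reachable
            lo2 = lo.copy()
            lo2[f] = max(lo2[f], np.nextafter(t, np.inf))  # x'[f] > t
            robust = visit(node.right, lo2, hi)
        return robust

    return visit(root, x - eps, x + eps)


# Usage: a stump predicting class 1 iff x[0] > 0.5.
tree = Node(feature=0, threshold=0.5,
            left=Node(label=0), right=Node(label=1))
print(is_robust(tree, np.array([0.8]), y=1, eps=0.2))  # True: box [0.6, 1.0]
print(is_robust(tree, np.array([0.8]), y=1, eps=0.4))  # False: box reaches leaf 0
```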
09/05/2023