Uses of machine learning in security typically focus on building classifiers to determine whether certain objects (behaviors, programs, network flows, etc.) are malicious, i.e., representative of an attack. Despite early successes, the rise of adversarial machine learning has called into question the robustness of many such approaches. In adversarial machine learning, an attacker learns how to craft a malicious object so that a target classifier labels it benign.
My talk will discuss adversarial machine learning in the domain of JavaScript exploits. I will review existing classifiers for the detection of JavaScript exploits, and outline techniques to obfuscate exploits automatically so that they evade detection. Our work shows that much remains to be done before JavaScript exploit detectors are robust to adversarial samples.
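The talk's actual techniques are not reproduced here, but the evasion setting it studies can be illustrated with a minimal toy sketch: a detector is trained on simple binary features of scripts, and an "attacker" greedily toggles one feature at a time until the detector labels the sample benign. The feature names, the synthetic data, and the greedy flipping strategy below are all invented for illustration, not taken from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary features a detector might extract from JavaScript.
FEATURES = ["eval_call", "long_strings", "char_codes", "dom_writes", "shellcode_like"]

# Synthetic training data: malicious samples tend to set more features.
n = 500
benign = (rng.random((n, 5)) < 0.15).astype(float)
malicious = (rng.random((n, 5)) < 0.75).astype(float)
X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)  # 1 = malicious

detector = LogisticRegression().fit(X, y)

# Start from a sample the detector flags as malicious (all features set).
sample = np.ones(5)
print("initial malicious score:", detector.predict_proba([sample])[0, 1])

# Greedy evasion: repeatedly flip whichever feature most lowers the
# malicious score, mimicking obfuscation that hides one trait at a time.
while detector.predict(sample.reshape(1, -1))[0] == 1:
    scores = []
    for i in range(5):
        candidate = sample.copy()
        candidate[i] = 1 - candidate[i]
        scores.append(detector.predict_proba([candidate])[0, 1])
    best = int(np.argmin(scores))
    sample[best] = 1 - sample[best]
    print(f"flipped {FEATURES[best]!r}, score now {scores[best]:.3f}")

print("evaded: detector now labels the sample benign")
```

Real JavaScript exploit detectors and the obfuscations that defeat them are, of course, far richer than this linear model, but the sketch captures the core adversarial loop: query the classifier, perturb the malicious object, repeat until misclassification.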
Date: 09/07/2019
Time: 14:00
Venue: Aula Alfa, Ground Floor, Dipartimento di Informatica, Via Salaria 113, 00198 Rome
Bio: Lorenzo De Carli’s research interests focus on the security of the web, clouds, and Internet-of-Things devices. His contributions include hardware accelerators for packet inspection and forwarding, parallelization strategies for intrusion detection, and analysis of malware communications. Lorenzo received a B.Sc. (2004) and an M.Sc. (2007) in Computer Engineering from Politecnico di Torino, Italy, and an M.Sc. (2010) and Ph.D. (2016) in Computer Science from the University of Wisconsin-Madison.