Since the early 2000s, there has been an extremely fruitful flow of ideas back and forth between theoretical computer science and the areas of pure mathematics known as additive and arithmetic combinatorics. The late Luca Trevisan (1971–2024, a Sapienza alumnus) was one of the pioneers of this exchange, initiating a line of work that sought complexity-theoretic analogues of results arising in the study of arithmetic progressions in subsets of the integers (such as the Dense Model Theorem of Green, Tao, and Ziegler, 2004 & 2006).
In this talk, I will describe how recent work emerging from the algorithmic fairness literature has further advanced Trevisan's vision. Specifically, as pointed out by Dwork, Lee, Lin, and Tankala (2023), the Multicalibration Theorem of Hébert-Johnson, Kim, Reingold, and Rothblum (2018) can be viewed as a complexity-theoretic analogue of Szemerédi's Regularity Lemma for graphs (1978). It decomposes every Boolean function into a combination of a "low-complexity" function and a few "random" functions. We have found this decomposition to be a very powerful complexity-theoretic tool; in particular, it yields simple proofs of numerous old and new results in average-case complexity, pseudorandomness, and cryptography.
Joint work with Sílvia Casacuberta, Cynthia Dwork, Lunjia Hu, Cassandra Marcussen, Louie Putterman, Omer Reingold, Luca Trevisan, and Madhur Tulsiani.
20/02/2026
Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at the Harvard John A. Paulson School of Engineering & Applied Sciences, and a Faculty Director of the OpenDP open-source software project. Vadhan’s research in theoretical computer science spans computational complexity, data privacy, and cryptography. He is a fellow of the American Academy of Arts & Sciences and of the Association for Computing Machinery, and his recognitions include a Gödel Prize, a Simons Investigator Award, and a Guggenheim Fellowship.