Lorenzo Cazzaro

As of April 2025, I am a postdoctoral researcher at Università Ca’ Foscari Venezia under the supervision of Prof. Stefano Calzavara and Prof. Salvatore Orlando. I also spent a period at CISPA as an intern under the supervision of Prof. Giancarlo Pellegrino.

I earned my Ph.D. in Computer Science with honors from Università Ca’ Foscari Venezia in March 2025. I obtained a Master’s Degree in Computer Science in July 2021 (Best Master Thesis in Computer Science for the academic year 2020/21 at Università Ca’ Foscari Venezia, and finalist for the Best Master Thesis Awards on Big Data & Data Science 2022 at the 1st Italian Conference on Big Data and Data Science) and a Bachelor’s Degree in Computer Science in November 2019 (Best first-year student of the Bachelor’s Degree program).

You can find my DBLP page here.

My research activities focus on Machine Learning (ML) Security, in particular the verification of security properties of ML models.

I am also interested in, and currently working on, the following research topics:

  • Adversarial Machine Learning.
  • Applications of Artificial Intelligence (AI) algorithms in Cybersecurity.
  • Machine Learning model watermarking and data exfiltration.

If you are interested in any of the topics I work on or in my publications, or if you would simply like to get in touch, the best way to reach me is by email (lorenzo.cazzaro@unive.it) or Twitter!

I am always looking for motivated students who enjoy working on the research topics I’m interested in! If you want to discuss possible topics for a Bachelor’s or Master’s thesis, feel free to email me!

news

Apr 23, 2025 Our tutorial “Towards Adversarially Robust ML in The Age of The AI Act” has been accepted at the 28th European Conference on Artificial Intelligence in Bologna, Italy. Antonio Emanuele Cinà and I will provide an overview of recent advancements in the security of Machine Learning and of methods to test and verify the robustness of Machine Learning models, with a specific focus on compliance with the European AI Act.
Apr 19, 2025 Our paper Less is More: Boosting Coverage of Web Crawling through Adversarial Multi-Armed Bandit has been accepted at DSN 2025! In this work, we propose a new state-agnostic Reinforcement Learning-based crawler that leverages Adversarial Multi-Armed Bandits to improve the exploration of web applications, overcoming the limitations of previous crawlers based on Reinforcement Learning. Available soon!
Dec 12, 2024 Our paper Watermarking Decision Tree Ensembles has been accepted at EDBT 2025! In this work, we propose the first watermarking scheme for decision tree ensembles and we analyze its security against relevant threats.
Dec 12, 2024 Our paper Timber! Poisoning Decision Trees has been accepted at IEEE SaTML 2025! In this work, we present a new poisoning attack against decision trees that can be performed without incurring large computational costs.
Nov 4, 2024 I have been selected as a Top Reviewer at the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) and the 17th ACM Workshop on Artificial Intelligence and Security (AISec 2024)!