Kathrin Grosse is a Research Scientist at IBM Research, Zurich, Switzerland. Her research focuses on AI security in practice, bridging AI security research and industry needs. She received her master's degree from Saarland University and her Ph.D. in 2021 from CISPA Helmholtz Center, Saarland University, under the supervision of Michael Backes, followed by postdocs with Battista Biggio in Cagliari, Italy, and Alexandre Alahi at EPFL, Switzerland. She interned with IBM in 2019 and Disney Research in 2020/21. As part of her work, she serves as a reviewer for IEEE S&P, USENIX Security, and ICML, organizes workshops at ICML, and contributes to patents. In 2019, she was nominated as an AI Newcomer for the German Federal Ministry of Education and Research's Science Year.
In this talk, we will revisit the evidence of vulnerabilities and exploits within the realm of Artificial Intelligence, encompassing both traditional AI and Large Language Models (LLMs). Such vulnerabilities necessitate prevention, which we suggest could be supported by incident reporting, ideally based on a taxonomy that allows the collection of all information needed to understand an incident and to track trends in exploits. We will discuss our proposal for such a framework, covering properties of the underlying AI, relevant security properties of the AI system, and incident specifics and implications.
Luca Demetrio is Assistant Professor at the University of Genoa. He is among the pioneers of offensive security against machine-learning antivirus programs, with his research on adversarial EXEmples. He is an associate editor for Pattern Recognition, and he serves as a reviewer for top-tier conferences (ICML, ICLR, USENIX, NeurIPS) and journals (T-IFS, COSE).
With the abundance of programs developed every day, it is possible to build next-generation antivirus programs that leverage this vast accumulated knowledge. In practice, these technologies combine established techniques like pattern matching with machine learning algorithms, both tailored to achieve high detection rates and low false alarms. While companies state that they apply both techniques, no rigorous investigation of the interconnection between these detection strategies has been properly discussed and evaluated, keeping further advancements in the field locked up in secrecy. In this talk, we will venture into both pattern-matching and data-based decision-making processes to study how they can be integrated, and how their performance can be tuned to improve their efficacy. We will also peek into the world of adversaries who want to sneak past these next-generation antivirus programs, highlighting new challenges along the way.
Building secure-by-design systems has become a cornerstone of software development, but with the rapid adoption of AI in virtually every deployed system, the methods used for measuring and analyzing security throughout software development need substantial rethinking. AI and software should be co-designed with security in mind, rather than addressing it separately or as an afterthought. This workshop aims to bridge the secure software design and security-for-AI research communities by providing a forum for discussing architectural and implementation challenges. The workshop will focus on security for AI-augmented systems, as well as the security aspects of AI itself, especially in real-world scenarios.
Topics of interest include (but are not limited to):
We invite the following types of papers (the page limits exclude well-marked references and appendix):
All papers should be submitted as a PDF file in double-column IEEE Conference Proceedings format (see Overleaf template). Submissions will go through a double-blind peer-review process aimed at selecting the papers to be presented at the workshop. There will be no formal workshop proceedings. Papers already published in other venues are welcome in all three categories. The accepted papers or slides will be made available to registered attendees on the workshop website. Submissions must be in English and properly anonymized.
Important: The use of generative AI by authors and reviewers is not allowed. Any submissions we (or our PC) find suspicious in this regard will be rejected. The organizers will oversee the review process and ensure that all reviewers provide high-quality feedback. If an author or a co-reviewer reports a suspicious review, one of the workshop chairs will check the review in question and, if needed, provide an additional review.
Submission link: https://easychair.org/my/conference?conf=demessai25
All accepted submissions will be presented at the workshop as posters. A subset of accepted papers will be selected for presentation as spotlights based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper.
For any questions, please contact one of the workshop organizers at k.tuma@vu.nl, maura.pintor@unica.it, or jamal.el-hachem@univ-ubs.fr.