5th International Workshop on
Designing and Measuring Security in Systems with AI
July 10th, 2026 — Lisbon
co-located with the 11th IEEE European Symposium on Security and Privacy (EuroS&P 2026)

Keynotes

Title: TBD

Vera Rimmer, Research Expert @ DistriNet, KU Leuven (Belgium)

Vera Rimmer is a research expert at DistriNet, KU Leuven. She specializes in cybersecurity, privacy-enhancing technologies, and the use of data analytics, machine learning, and deep learning for secure and trustworthy AI systems.

Title: TBD

Adriana Sejfia, Assistant Professor @ University of Edinburgh (UK)

Adriana Sejfia is an Assistant Professor (Lecturer, in the UK system) at the School of Informatics, University of Edinburgh. Her research is in software engineering, program analysis, and security.


Panel with Keynote Speakers

Mehdi Mirakhorli, Associate Professor @ University of Hawaii (US)

Mehdi Mirakhorli is an Associate Professor at the University of Hawaii at Manoa. His research is in software engineering, trustworthy software, software assurance, cybersecurity, AI, scientific software development, and software-enabled sustainable disposal.

Dimitri Van Landuyt, Associate Professor @ KU Leuven (Belgium)

Dimitri Van Landuyt is an Associate Professor in Information Systems Engineering at the LIRIS group and a part-time research manager at the DistriNet research group, KU Leuven. His research is in security and privacy by design, cloud computing, and technology for economic data spaces.


Maria Méndez Real, Associate Professor @ University of South Brittany (France)

Maria Méndez Real is an associate professor at the University of South Brittany (France). She is an expert in hardware security and the security of machine learning implementations.


Luca Demetrio, Assistant Professor @ University of Genoa (Italy)

Luca Demetrio is an assistant professor at the University of Genoa (Italy). He specializes in machine learning security, particularly in the malware domain.


Call for Papers

  • Important dates. All deadlines are Anywhere on Earth (AoE = UTC-12h):

    • Workshop paper submission: March 12, 2026
    • Workshop acceptance notification: April 10, 2026
  • Scope of papers. We invite the following types of papers (page limits exclude well-marked references and appendices):

    • Extended abstracts for a poster session (maximum of 2 pages) that describe ongoing ideas and work in progress and would benefit from quick feedback from the research community.
    • Original research papers (maximum of 6 pages) that describe novel contributions, report on experimental results, or present industry experiences such as case or field studies.
    • Position and open problem papers (maximum of 6 pages) discussing promising preliminary experimental results, approaches, ideas, or challenging issues for application in industry; future perspectives and roadmap papers; and “Systematization of Knowledge” papers that provide a comprehensive view of the state of the art on the workshop topics.
  • Handling the use of generative AI. Since there will be no formal workshop proceedings and the focus is placed on research talks, we see a low risk of AI-generated submissions. Any suspicious submissions encountered by the organizers or the PC will be rejected. The organizers will oversee the review process and ensure that all reviewers provide high-quality feedback. If an author or co-reviewer reports a suspicious review, one of the workshop chairs will check the review in question and, if needed, provide an additional review.

  • Paper topics. Topics of interest include, but are not limited to, the following areas:

    (a) Applications of AI for enhancing security

    • AI for security requirements engineering, secure coding and application security guidelines
    • AI for assessing security design and threat modeling documents, and planned mitigations
    • AI for aiding security code review, securing source code, and processing documentation
    • AI for SAST, DAST, penetration testing, application and container security testing
    • AI for incident response planning and execution

    (b) Modeling security for AI-augmented systems

    • Approaches to secure software architecture
    • Security risk assessment and analysis
    • Security risk management
    • Threat, attack, intrusion and defense modeling
    • Challenges with modeling or integrating legacy systems with AI components

    (c) Enforcing security for AI-augmented systems

    • Preventing AI misuse and AI benchmarking
    • Enforcing security between design and implementation
    • Enforcing security between implementation and runtime
    • Developing attacks and defenses

    (d) Measuring security for AI-augmented systems

    • Metrics and measurement approaches
    • Security, trust and privacy metrics
    • Measurement systems and associated data gathering
    • Security trade-off analyses
    • Assurance and security certification methods
    • Development-time and runtime security measurements
    • Visualization approaches for security measurements
    • Human aspects and diversity effects

Committee

Workshop Chairs

Steering Committee

Program Committee

  • David Pape (CISPA)
  • Denis Trcek (University of Ljubljana)
  • Dimitri Van Landuyt (KU Leuven)
  • Elena Lisova (MDU, VCE)
  • Emanuele Iannone (Hamburg University of Technology)
  • Giorgio Piras (University of Cagliari)
  • Julien Francq (Naval Group)
  • Mengyuan Zhang (Vrije Universiteit Amsterdam)
  • Muhammad Ali Babar (The University of Adelaide)
  • Phu Nguyen (SINTEF)
  • Riccardo Scandariato (Hamburg University of Technology)
  • Simon Schneider (Hamburg University of Technology)
  • Stjepan Picek (Radboud University)
  • Sven Peldszus (Ruhr University Bochum)
  • Tong Li (Beijing University of Technology)
  • Vianney Lapôtre (Université de Bretagne-Sud)