5th International Workshop on
Designing and Measuring Security in Systems with AI
July 10th, 2026 — Lisbon
co-located with the 11th IEEE European Symposium on Security and Privacy (EuroS&P 2026)

Call for Papers

  • Important dates. All deadlines are Anywhere on Earth (AoE = UTC-12h):

    • Workshop paper submission: March 12, 2026
    • Workshop acceptance notification: April 10, 2026
  • Scope of papers. We invite the following types of papers (page limits exclude well-marked references and appendices):

    • Extended abstracts for a poster session (maximum of 2 pages) that describe ongoing ideas and work in progress that would benefit from quick feedback from the research community.
    • Original research papers (maximum of 6 pages) that describe novel contributions, report on experimental results, or present industry experiences such as case or field studies.
    • Position and open problem papers (maximum of 6 pages) that discuss promising preliminary experimental results, approaches, ideas, or challenging issues for industrial application; future perspectives and roadmap papers; and systematization-of-knowledge papers that provide a comprehensive view of the state of the art on the workshop topics.
  • Handling the use of generative AI. Since there will be no formal workshop proceedings and the focus is on research talks, we see a low risk of AI-generated submissions. Should the organizers or the PC encounter a suspicious submission, it will be rejected. The organizers will oversee the review process and ensure that all reviewers provide high-quality feedback. If an author or co-reviewer reports a suspicious review, one of the workshop chairs will check the review in question and, if needed, provide an additional review.

  • Paper topics. Topics of interest include, but are not limited to, the following areas:

    (a) Applications of AI for enhancing security

    • AI for security requirements engineering, secure coding and application security guidelines
    • AI for assessing security design and threat modeling documents, and planned mitigations
    • AI for aiding security code review, securing source code, and processing documentation
    • AI for SAST, DAST, penetration testing, application and container security testing
    • AI for incident response planning and execution

    (b) Modeling security for AI-augmented systems

    • Approaches to secure software architecture
    • Security risk assessment and analysis
    • Security risk management
    • Threat, attack, intrusion and defense modeling
    • Challenges with modeling or integrating legacy systems with AI components

    (c) Enforcing security for AI-augmented systems

    • Preventing AI misuse and AI benchmarking
    • Enforcing security between design and implementation
    • Enforcing security between implementation and runtime
    • Developing attacks and defenses

    (d) Measuring security for AI-augmented systems

    • Metrics and measurement approaches
    • Security, trust and privacy metrics
    • Measurement systems and associated data gathering
    • Security trade-off analyses
    • Assurance and security certification methods
    • Development-time and runtime security measurements
    • Visualization approaches for security measurements
    • Human aspects and diversity effects

Committee

Workshop Chairs

Steering Committee