Introducing the SprintML Lab
We are the SprintML lab, with a research focus on Secure, Private, Robust, INterpretable, and Trustworthy Machine Learning. The lab is jointly led by Professors Adam Dziedzic and Franziska Boenisch and is located at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany. Get to know our team and find out about our latest research.
Join SprintML!
We are currently hiring Ph.D. students, Postdocs, and Research Interns with a research focus in one or more of the following areas:
- Secure and Robust Machine Learning
- Privacy-Preserving Machine Learning
- Distributed and Federated Learning
- Machine Learning Model Confidentiality
- Trustworthy Language Processing
If you are interested in working with us, please check our open positions.
Past Updates
News
- April 20 2026: Our most successful ICLR yet, with 5 papers accepted at ICLR'26! Natural identifiers for privacy and data audits in LLMs, data provenance for image auto-regressive generation, curation leaks: membership inference attacks against data curation, SERUM: simple, efficient, robust, and unifying marking for diffusion-based image generation, and benchmarking empirical privacy protection for LLM adaptations (ORAL)!
- November 09 2025: Excited to announce three papers accepted to AAAI'26! Our work on mitigating unsafe text in image generative models, demystifying foreground and background memorization in diffusion models, and our graph-stealing paper, which received an ORAL presentation!
- September 18 2025: Excited to share three papers accepted to NeurIPS'25 on watermarking bitwise autoregressive models (BitMark), memorization in GNNs, and strong membership inference attacks on massive datasets and large language models!