Posts

Natural Identifiers for Privacy and Data Audits

Published: Apr 25, 2026

Benchmarking Empirical Privacy Protection for Adaptations of LLMs

Published: Apr 25, 2026

SERUM: Simple, Efficient, Robust, and Unifying Marking for Diffusion-based Image Generation

Published: Apr 25, 2026

Beautiful Images, Toxic Words: Understanding and Addressing Offensive Text in AI-Generated Images

Published: Mar 8, 2026

BitMark: Watermarking Bitwise Autoregressive Image Generative Models

Published: Nov 30, 2025

Captured by Captions: On Memorization and its Mitigation in Multi-Modal Models

Published: Mar 3, 2025

Image AutoRegressive Models Leak More Training Data Than Diffusion Models

Published: Feb 4, 2025

Private Adaptations of Open LLMs Outperform their Closed Alternatives

Published: Dec 10, 2024

How to prompt LLMs with private data?

Published: Apr 28, 2024

Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders

Published: Dec 10, 2023

On stealing and defending self-supervised models

Published: Feb 23, 2023

How to Keep a Model Stealing Adversary Busy?

Published: Apr 21, 2022

All You Need Is Matplotlib

Published: Apr 17, 2022

Beyond federation: collaborating in ML with confidentiality and privacy

Published: May 1, 2021