CSIPlab / SLUG
Official repository for "Targeted Unlearning with Single Layer Unlearning Gradient" (SLUG), ICML 2025
☆14, updated 5 months ago
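Going by the title alone, SLUG's core idea is to unlearn a target by applying an unlearning gradient to a single layer. A rough, hypothetical NumPy sketch of that idea follows; it is not the repository's actual implementation, and the layer selection by gradient norm plus the one-step ascent update are assumptions for illustration only:

```python
import numpy as np

# Hypothetical sketch of single-layer unlearning (NOT the SLUG repo's code):
# compute the forget-set gradient per layer, pick the layer with the largest
# gradient norm, and update only that layer via gradient ascent.
rng = np.random.default_rng(0)

# A tiny two-layer linear model: pred = W2 @ (W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

def forward(x):
    return W2 @ (W1 @ x)

def forget_loss_grads(x, y):
    """Gradients of squared error on a 'forget' example w.r.t. each layer."""
    h = W1 @ x
    pred = W2 @ h
    err = pred - y                 # d(loss)/d(pred), up to a factor of 2
    gW2 = np.outer(err, h)         # d(loss)/dW2
    gW1 = np.outer(W2.T @ err, x)  # d(loss)/dW1
    return {"W1": gW1, "W2": gW2}

x_forget = rng.normal(size=3)
y_forget = forward(x_forget)       # pretend this output is memorized

# Gradient of the forget loss against a perturbed target.
grads = forget_loss_grads(x_forget, y_forget + 1.0)

# Select the single layer whose unlearning gradient has the largest norm.
target = max(grads, key=lambda k: np.linalg.norm(grads[k]))
print("layer selected for unlearning:", target)

# One gradient-ascent step on the forget loss, applied to that layer only.
lr = 0.1
if target == "W1":
    W1 += lr * grads["W1"]
else:
    W2 += lr * grads["W2"]
```

Restricting the update to one layer keeps the rest of the network untouched, which is one plausible way to limit collateral damage to retained knowledge; consult the repository and paper for the actual selection and update rules.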
Alternatives and similar repositories for SLUG
Users interested in SLUG compare it to the repositories listed below.
- [NeurIPS 2024] "What makes unlearning hard and what to do about it" and [NeurIPS 2024] "Scalability of memorization-based machine unlearning" ☆21, updated 8 months ago
- Official implementation of the NeurIPS 2024 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆49, updated last year
- The official implementation of the ECCV 2024 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆86, updated 11 months ago
- [CVPR 2023] Adversarial Robustness via Random Projection Filters ☆13, updated 2 years ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆12, updated 2 years ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23, updated last year
- [ECCV 2024] Transferable targeted adversarial attacks; CLIP models, generative adversarial networks, multi-target attacks ☆38, updated 9 months ago
- Code repository for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆27, updated 4 months ago
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆25, updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49, updated last year
- [BMVC 2023] Semantic Adversarial Attacks via Diffusion Models ☆24, updated 2 years ago
- [SatML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆16, updated 10 months ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20, updated 2 years ago
- List of T2I (text-to-image) safety papers, updated daily; discussion welcome via GitHub Discussions ☆67, updated last year
- 🛡️ [ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack ☆59, updated last year
- [CVPR 2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" ☆26, updated 7 months ago
- [CVPRW 2023] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu ☆26, updated last year
- Official repository of the paper "Latent Guard: A Safety Framework for Text-to-image Generation" ☆52, updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42, updated 2 years ago
- [CVPR 2025] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29, updated 7 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16, updated last year
- AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models (ICCV 2023) ☆19, updated 2 years ago
- [NeurIPS 2022] GAMA: Generative Adversarial Multi-Object Scene Attacks ☆19, updated 2 years ago
- PDM-based Purifier ☆22, updated last year
- [CVPR 2024 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆28, updated last year