chiayi-hsu / Ring-A-Bell
☆38 · Jan 15, 2025 · Updated last year
Alternatives and similar repositories for Ring-A-Bell
Users interested in Ring-A-Bell are comparing it to the repositories listed below.
- ☆23 · Feb 5, 2026 · Updated last week
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" · ☆87 · Feb 28, 2025 · Updated 11 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu · ☆26 · Aug 27, 2024 · Updated last year
- Official implementation of Safe Latent Diffusion for Text2Image · ☆94 · Apr 21, 2023 · Updated 2 years ago
- [ICLR 2025] SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation · ☆53 · Jan 22, 2025 · Updated last year
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model · ☆18 · Feb 16, 2025 · Updated 11 months ago
- [CVPR 2025] Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models · ☆16 · Jan 8, 2026 · Updated last month
- [CVPR 2024] Self-Discovering Interpretable Diffusion Latent Directions for Responsible Text-to-Image Generation · ☆47 · May 14, 2024 · Updated last year
- Unified Concept Editing in Diffusion Models · ☆183 · Dec 7, 2025 · Updated 2 months ago
- List of T2I safety papers, updated daily; discussion is welcome via Discussions · ☆67 · Aug 12, 2024 · Updated last year
- Towards Memorization-Free Diffusion Models (CVPR 2024) codebase · ☆12 · Jun 2, 2024 · Updated last year
- ☆47 · Jul 14, 2024 · Updated last year
- EraseDiff: Erasing Data Influence in Diffusion Models · ☆14 · Nov 20, 2024 · Updated last year
- Official repository for Targeted Unlearning with Single Layer Unlearning Gradient (SLUG), ICML 2025 · ☆15 · Aug 10, 2025 · Updated 6 months ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models · ☆383 · Jan 8, 2026 · Updated last month
- ☆13 · Jan 14, 2026 · Updated last month
- Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" · ☆49 · Nov 4, 2024 · Updated last year
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ☆90 · Dec 20, 2025 · Updated last month
- [CVPR 2025] Official PyTorch implementation of GLoCE: Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free G… · ☆19 · Jul 10, 2025 · Updated 7 months ago
- Concept Learning Dynamics · ☆16 · Oct 29, 2024 · Updated last year
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining · ☆19 · Feb 26, 2025 · Updated 11 months ago
- ☆35 · May 22, 2024 · Updated last year
- ☆197 · Apr 7, 2025 · Updated 10 months ago
- [BMVC 2024] Erasing Concepts from Text-to-Image Diffusion Models with Few-shot Unlearning · ☆14 · Sep 15, 2025 · Updated 5 months ago
- ☆65 · Sep 29, 2024 · Updated last year
- Repository for the USENIX Security 2023 paper "Hard-label Black-box Universal Adversarial Patch Attack" · ☆15 · Sep 5, 2023 · Updated 2 years ago
- ☆15 · Dec 12, 2022 · Updated 3 years ago
- ☆40 · Jun 1, 2023 · Updated 2 years ago
- An interactive attention visualization and intervention tool for the LLM decode stage · ☆43 · Jan 6, 2026 · Updated last month
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" · ☆16 · Jul 15, 2024 · Updated last year
- ☆16 · Feb 23, 2025 · Updated 11 months ago
- Official code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) · ☆23 · Oct 23, 2024 · Updated last year
- [CVPR 2024] "MACE: Mass Concept Erasure in Diffusion Models" (official implementation) · ☆393 · Jun 2, 2025 · Updated 8 months ago
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models · ☆24 · Aug 8, 2025 · Updated 6 months ago
- [ECCV 2024] Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models · ☆83 · Oct 29, 2024 · Updated last year
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second · ☆28 · Nov 19, 2024 · Updated last year
- [ICLR 2024 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation…" · ☆141 · May 27, 2025 · Updated 8 months ago
- ☆27 · Jan 23, 2024 · Updated 2 years ago
- Official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) · ☆25 · Sep 10, 2024 · Updated last year