LetterLiGo / SafeGen_CCS2024
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
☆129 · Updated last week
Alternatives and similar repositories for SafeGen_CCS2024:
Users interested in SafeGen_CCS2024 are comparing it to the repositories listed below.
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆56 · Updated 6 months ago
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆41 · Updated 11 months ago
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack ☆33 · Updated 4 months ago
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆93 · Updated 2 months ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆156 · Updated 11 months ago
- [NeurIPS 2024] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts ☆20 · Updated 4 months ago
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆34 · Updated 9 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆70 · Updated last week
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆108 · Updated 5 months ago
- [ICLR Workshop 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆103 · Updated 3 weeks ago
- Practical Detection of Trojan Neural Networks ☆119 · Updated 4 years ago
- [MM24 Oral] Identity-Driven Multimedia Forgery Detection via Reference Assistance ☆97 · Updated 7 months ago
- A comprehensive collection of resources on addressing and understanding hallucination phenomena in MLLMs ☆34 · Updated 10 months ago
- [ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, … ☆65 · Updated 2 years ago
- ☆110 · Updated 3 weeks ago
- ☆19 · Updated 3 weeks ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆149 · Updated last week
- Attack classification models with transferability; black-box and unrestricted adversarial attacks on ImageNet, CVPR 2021. Security AI Challenger Program, Round 6: … ☆48 · Updated 3 years ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆62 · Updated last week
- Code repository for the submission "Understanding the Dark Side of LLMs' Intrinsic Self-Correction" ☆55 · Updated 3 months ago
- Code for the ACL 2024 long paper "Are AI-Generated Text Detectors Robust to Adversarial Perturbations?" ☆27 · Updated 8 months ago
- Code for ACM MM 2024 (Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning) ☆11 · Updated 8 months ago
- ☆73 · Updated last month
- ☆31 · Updated 8 months ago
- Official code for ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users (NeurIPS 2024) ☆14 · Updated 5 months ago
- CVPR 2022 Workshop on Robust Classification ☆78 · Updated 2 years ago
- ☆10 · Updated last month
- ☆193 · Updated 3 weeks ago
- [USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malici… ☆60 · Updated 5 months ago
- ☆12 · Updated last month