LetterLiGo / SafeGen_CCS2024
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
☆129 · Updated last month
Alternatives and similar repositories for SafeGen_CCS2024
Users interested in SafeGen_CCS2024 are comparing it to the libraries listed below.
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆55 · Updated 7 months ago
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack ☆33 · Updated 6 months ago
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆41 · Updated last year
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆99 · Updated last month
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆159 · Updated last year
- ☆119 · Updated this week
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆34 · Updated 11 months ago
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… ☆94 · Updated 2 weeks ago
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆110 · Updated 7 months ago
- [MM24 Oral] Identity-Driven Multimedia Forgery Detection via Reference Assistance ☆102 · Updated last month
- A comprehensive collection of resources focused on addressing and understanding hallucination phenomena in MLLMs ☆34 · Updated last year
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆114 · Updated 2 weeks ago
- Practical Detection of Trojan Neural Networks ☆119 · Updated 4 years ago
- [NeurIPS 2024] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts ☆51 · Updated this week
- ☆29 · Updated 2 months ago
- [ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, … ☆65 · Updated 2 years ago
- The official implementation of the paper "Invisible Backdoor Attack against Self-supervised Learning" ☆11 · Updated 3 weeks ago
- ☆67 · Updated 4 months ago
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list ☆162 · Updated last week
- ☆17 · Updated last year
- A collection of papers related to knowledge fusion ☆54 · Updated 7 months ago
- [USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malici… ☆61 · Updated 7 months ago
- Attack classification models with transferability, black-box attack; unrestricted adversarial attacks on ImageNet, CVPR 2021 Security AI Challenger Program, Round 6: … ☆48 · Updated 3 years ago
- CVPR 2022 Workshop Robust Classification ☆78 · Updated 2 years ago
- Machine-generated text detection in the wild (ACL 2024) ☆197 · Updated 2 months ago
- ☆73 · Updated 3 months ago
- [ICLR 2025] Improving Data Efficiency via Curating LLM-Driven Rating Systems ☆93 · Updated last month
- [ICLR 2023] Official TensorFlow implementation of "Distributionally Robust Post-hoc Classifiers under Prior Shifts" ☆34 · Updated last year
- Code for the ACL 2024 long paper: Are AI-Generated Text Detectors Robust to Adversarial Perturbations? ☆28 · Updated 10 months ago
- ☆33 · Updated 4 months ago