LetterLiGo / SafeGen_CCS2024Links
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
☆134 · Updated 2 months ago
Alternatives and similar repositories for SafeGen_CCS2024
Users interested in SafeGen_CCS2024 are comparing it to the repositories listed below.
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆57 · Updated 11 months ago
- Improving Fast Adversarial Training with Prior-Guided Knowledge (TPAMI 2024) ☆41 · Updated last year
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack (TPAMI 2025) ☆37 · Updated this week
- [ACL 2025 Main] HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States ☆133 · Updated 2 months ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆168 · Updated 2 months ago
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆35 · Updated last year
- Improved Techniques for Optimization-Based Jailbreaking on Large Language Models (ICLR 2025) ☆125 · Updated 4 months ago
- [NAACL 2025] SIUO: Cross-Modality Safety Alignment ☆112 · Updated 7 months ago
- YiJian-Comunity: a full-process automated large-model safety evaluation tool designed for academic research ☆114 · Updated 10 months ago
- [ICML 2022] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, … ☆65 · Updated 2 years ago
- Practical Detection of Trojan Neural Networks ☆120 · Updated 4 years ago
- [MM 2024 Oral] Identity-Driven Multimedia Forgery Detection via Reference Assistance ☆106 · Updated last month
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆131 · Updated 3 months ago
- [ICLR 2023] Official TensorFlow implementation of "Distributionally Robust Post-hoc Classifiers under Prior Shifts" ☆33 · Updated last year
- A comprehensive collection of resources focused on addressing and understanding hallucination phenomena in MLLMs ☆34 · Updated last year
- A Unified Benchmark & Codebase for All-Domain Fake Image Detection and Localization ☆129 · Updated last month
- Attacking classification models via transferability (black-box attacks); unrestricted adversarial attacks on ImageNet, CVPR 2021 Security AI Challenger Program, Phase 6: … ☆49 · Updated 4 years ago
- A curated list of awesome papers related to adversarial attacks and defenses for information retrieval. If I missed any papers, feel free… ☆219 · Updated last year
- CVPR 2022 Workshop on Robust Classification ☆78 · Updated 3 years ago
- [USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malici… ☆63 · Updated 10 months ago
- Official repo for the paper "Large Language Models can be Guided to Evade AI-Generated Text Detection" (TMLR 2024) ☆66 · Updated 2 years ago
- [NeurIPS 2024] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts ☆53 · Updated 2 months ago
- [NeurIPS 2022] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Min… ☆118 · Updated 2 years ago
- [ICLR 2025] Official implementation of the paper "Improving Data Efficiency via Curating LLM-Driven Rating Systems" ☆97 · Updated 5 months ago
- AISafetyLab: a comprehensive framework covering safety attacks, defenses, evaluation, and a paper list ☆200 · Updated this week