LetterLiGo / SafeGen_CCS2024
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
☆137 · Updated 4 months ago
Alternatives and similar repositories for SafeGen_CCS2024
Users interested in SafeGen_CCS2024 are comparing it to the libraries listed below.
- [NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time ☆57 · Updated last year
- Code for Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack (TPAMI 2025) ☆38 · Updated 2 months ago
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆43 · Updated last year
- ACL 2025 (Main) HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States ☆143 · Updated 4 months ago
- [CVPR 2024] MMA-Diffusion: MultiModal Attack on Diffusion Models ☆312 · Updated last month
- [NAACL 2025] SIUO: Cross-Modality Safety Alignment ☆118 · Updated 9 months ago
- Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging (TIFS 2024) ☆35 · Updated last year
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆131 · Updated 6 months ago
- YiJian-Community: a full-process automated large model safety evaluation tool designed for academic research ☆114 · Updated last year
- [ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, …☆64Updated 3 years ago
- [MM24 Oral] Identity-Driven Multimedia Forgery Detection via Reference Assistance☆111Updated 3 months ago
- Practical Detection of Trojan Neural Networks☆120Updated 4 years ago
- [NeurIPS 2025] An official source code for paper "GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning".☆110Updated last month
- ☆81Updated 4 months ago
- [ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping".☆149Updated 6 months ago
- A Unified Benchmark & Codebase for All-Domain Fake Image Detection and Localization☆161Updated last month
- ☆29Updated 7 months ago
- The official code repo for "Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets" in ICML 2025.☆56Updated 4 months ago
- CVPR 2022 Workshop Robust Classification☆79Updated 3 years ago
- Code for ACL 2024 long paper: Are AI-Generated Text Detectors Robust to Adversarial Perturbations?☆32Updated last year
- [ICLR 2023] Official Tensorflow implementation of "Distributionally Robust Post-hoc Classifiers under Prior Shifts"☆33Updated last year
- ☆70Updated 10 months ago
- A comprehensive collection of resources focused on addressing and understanding hallucination phenomena in MLLMs.☆34Updated last year
- Attack classification models with transferability, black-box attack; unrestricted adversarial attacks on ImageNet, CVPR 2021 Security AI Challenger Program, Round 6: … ☆51 · Updated 4 years ago
- [NeurIPS22] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Min…☆117Updated 2 years ago
- Official repo for paper "Large Language Models can be Guided to Evade AI-Generated Text Detection" in TMLR 2024.☆69Updated 2 years ago
- [NeurIPS 2024] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts☆53Updated 4 months ago
- A curated list of awesome papers related to adversarial attacks and defenses for information retrieval. If I missed any papers, feel free…☆218Updated last year
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv…☆156Updated 3 weeks ago
- [USENIX Security '24] Dataset associated with real-world malicious LLM applications, including 45 malicious prompts for generating malici…☆66Updated 3 weeks ago