ATIpiu / SafeGenInjectLinks
Second-place solution for Track 1 of the Global AI Offense and Defense Challenge: safety vaccine injection for large-model image generation
☆24 · Updated 9 months ago
Alternatives and similar repositories for SafeGenInject
Users interested in SafeGenInject are comparing it to the repositories listed below
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆102 · Updated 10 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆184 · Updated 6 months ago
- Accepted by IJCAI-24 Survey Track ☆212 · Updated last year
- A list of recent adversarial attack and defense papers (including those on large language models) ☆43 · Updated this week
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆371 · Updated 3 weeks ago
- A collection of AIGC-detection-related papers. ☆124 · Updated 10 months ago
- A Simple Baseline Achieving Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1. Paper at: https://arxiv.org/abs/2… ☆70 · Updated 4 months ago
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆162 · Updated 2 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆216 · Updated 3 weeks ago
- Repository for the paper (AAAI 2024, Oral): Visual Adversarial Examples Jailbreak Large Language Models ☆234 · Updated last year
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆53 · Updated last week
- Strong baselines for tampered text detection in the pure vision domain ☆23 · Updated 8 months ago
- An attack that induces hallucinations in LLMs ☆155 · Updated last year
- Official repository of RiOSWorld ☆34 · Updated last month
- JailBench: a Chinese dataset for evaluating the jailbreak-attack risk of large language models [PAKDD 2025] ☆119 · Updated 6 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆167 · Updated 2 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆208 · Updated 11 months ago
- LLMs can be Dangerous Reasoners: Analyzing-based Jailbreak Attack on Large Language Models ☆21 · Updated last month
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆57 · Updated 5 months ago
- The official repository for the guided jailbreak benchmark ☆18 · Updated last month
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs). ☆45 · Updated last week
- Code for the ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆30 · Updated 8 months ago