STAIR-BUPT / JailBench
JailBench:大型语言模型越狱攻击风险评测中文数据集 [PAKDD 2025]
☆57Updated last week
Alternatives and similar repositories for JailBench:
Users that are interested in JailBench are comparing it to the libraries listed below
- 复旦白泽大模型安全基准测试集(2024年夏季版)☆33Updated 7 months ago
- SC-Safety: 中文大模型多轮对抗安全基准☆125Updated 11 months ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆383Updated 2 weeks ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…☆75Updated 5 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings]☆176Updated 5 months ago
- ☆43Updated 10 months ago
- Official github repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024]☆197Updated 8 months ago
- 本文提出了一个基于“文心一言”的中国LLMs的安全评估基准,其中包括8种典型的安全场景和6种指令攻击类型。此外,本文还提出了安全评估的框架和过程,利用手动编写和收集开源数据的测试Prompts,以及人工干预结合利用LLM强大的评估能力作为“共同评估者”。☆23Updated last year
- SecProbe:任务驱动式大模型安全能力评测系统☆12Updated 3 months ago
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain.☆148Updated 2 months ago
- Flames is a highly adversarial benchmark in Chinese for LLM's harmlessness evaluation developed by Shanghai AI Lab and Fudan NLP Group.☆43Updated 9 months ago
- ☆114Updated 5 months ago
- [NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey☆89Updated 7 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆455Updated 5 months ago
- AutoMSS 是基于AI Agent实现的针对安全事件自动化分析研判的系统,由cloud Totem团队开发,希望有兴趣的朋友可以一起参与进来更新和完善。邮箱联系:automss@cloud-totem.com☆42Updated 9 months ago
- AutoAudit—— the LLM for Cyber Security 网络安全大语言模型☆311Updated last week
- Chinese safety prompts for evaluating and improving the safety of LLMs. 中文安全prompts,用于评估和提升大模型的安全性。☆945Updated last year
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker"☆135Updated last year
- 针对大语言模型的对抗性攻击总结☆22Updated last year
- The official dataset of paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs".☆15Updated last year
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers☆47Updated 6 months ago
- The opensoure repository of FuzzLLM☆22Updated 10 months ago
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang…☆94Updated last month
- LLM 安全资料收集与学习☆17Updated 8 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM☆29Updated last month
- ☆88Updated 11 months ago
- CS-Eval is a comprehensive evaluation suite for fundamental cybersecurity models or large language models' cybersecurity ability.☆35Updated 3 months ago
- Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT☆30Updated last year