STAIR-BUPT / JailBench
JailBench:大型语言模型越狱攻击风险评测中文数据集
☆37Updated 7 months ago
Alternatives and similar repositories for JailBench:
Users that are interested in JailBench are comparing it to the libraries listed below
- 复旦白泽大模型安全基准测试集(2024年夏季版)☆32Updated 6 months ago
- SC-Safety: 中文大模型多轮对抗安全基准☆119Updated 11 months ago
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings]☆173Updated 4 months ago
- ☆44Updated 9 months ago
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆364Updated 2 months ago
- Official github repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024]☆190Updated 7 months ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…☆68Updated 4 months ago
- 本文提出了一个基于“文心一言”的中国LLMs的安全评估基准,其中包括8种典型的安全场景和6种指令攻击类型。此外,本文还提出了安全评估的框架和过程,利用手动编写和收集开源数据的测试Prompts,以及人工干预结合利用LLM强大的评估能力作为“共同评估者”。☆22Updated last year
- ☆146Updated 3 weeks ago
- [NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey☆87Updated 6 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆450Updated 4 months ago
- The official dataset of paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs".☆15Updated last year
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain.☆129Updated last month
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker"☆134Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆117Updated 2 months ago
- ☆112Updated 5 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM☆28Updated last month
- ☆74Updated 2 weeks ago
- Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT☆30Updated last year
- Agent Security Bench (ASB)☆62Updated last week
- ☆52Updated 7 months ago
- TAP: An automated jailbreaking method for black-box LLMs☆145Updated 2 months ago
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…☆292Updated last month
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆553Updated 5 months ago
- LLMs can be Dangerous Reasoners: Analyzing-based Jailbreak Attack on Large Language Models☆17Updated this week
- SecProbe:任务驱动式大模型安全能力评 测系统☆11Updated 2 months ago
- ☆41Updated 8 months ago
- Chinese safety prompts for evaluating and improving the safety of LLMs. 中文安全prompts,用于评估和提升大模型的安全性。☆925Updated 11 months ago
- 基于ChatGPT构建的中文self-instruct数据集☆113Updated last year
- Attack to induce LLMs within hallucinations☆144Updated 9 months ago