thu-coai / SafetyBench
Official GitHub repo for SafetyBench, a comprehensive benchmark for evaluating the safety of LLMs. [ACL 2024]
☆216 · Updated 10 months ago
Alternatives and similar repositories for SafetyBench:
Users interested in SafetyBench are comparing it to the repositories listed below.
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆188 · Updated 7 months ago
- Flames: a highly adversarial Chinese benchmark for evaluating LLM harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group ☆47 · Updated 11 months ago
- SC-Safety: a multi-turn adversarial safety benchmark for Chinese large language models ☆134 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆145 · Updated last month
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆94 · Updated 8 months ago
- S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models ☆64 · Updated 2 weeks ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆136 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents [EMNLP 2024 Findings] ☆76 · Updated 3 weeks ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 10 months ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆42 · Updated 10 months ago
- Research on evaluating and aligning the values of Chinese large language models ☆510 · Updated last year
- Generative Judge for Evaluating Alignment ☆236 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other models ☆363 · Updated 8 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆125 · Updated 11 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆114 · Updated 7 months ago
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey". ☆80 · Updated last year
- Chinese safety prompts for evaluating and improving the safety of LLMs ☆1,008 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆73 · Updated last year
- "他山之石、可以攻玉":复旦白泽智能发布面向国内开源和国外商用大模型的Demo数据集JADE-DB☆403Updated last month
- LLM Unlearning ☆153 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆244 · Updated 10 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 5 months ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆52 · Updated 8 months ago
- Papers about red-teaming LLMs and multimodal models ☆113 · Updated 5 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆135 · Updated 5 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆170 · Updated 3 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆81 · Updated 2 months ago