S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models
☆109 · Feb 13, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for S-Eval
Users interested in S-Eval are comparing it to the repositories listed below.
- Flames is a highly adversarial benchmark in Chinese for evaluating LLMs' harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group. ☆63 · May 21, 2024 · Updated last year
- ☆20 · May 31, 2024 · Updated last year
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆25 · Nov 29, 2024 · Updated last year
- SC-Safety: a multi-turn adversarial safety benchmark for Chinese large language models ☆150 · Mar 15, 2024 · Updated last year
- ☆11 · Mar 13, 2023 · Updated 2 years ago
- Consuming Resources via Auto-Generation for LLM-DoS Attacks under Black-box Settings ☆18 · Sep 1, 2025 · Updated 6 months ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆272 · Jul 28, 2025 · Updated 7 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Mar 8, 2025 · Updated 11 months ago
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list. ☆231 · Aug 29, 2025 · Updated 6 months ago
- LLM evaluation. ☆16 · Nov 7, 2023 · Updated 2 years ago
- Code release for RobOT (ICSE'21) ☆15 · Dec 5, 2022 · Updated 3 years ago
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆94 · Nov 9, 2023 · Updated 2 years ago
- Reproducible Language Agent Research ☆34 · Jun 25, 2025 · Updated 8 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆568 · Updated this week
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Apr 23, 2025 · Updated 10 months ago
- ☆12 · Mar 5, 2025 · Updated 11 months ago
- Code implementation of MuScleLoRA (accepted at ACL 2024) ☆10 · Dec 1, 2024 · Updated last year
- ☆11 · Jan 3, 2024 · Updated 2 years ago
- "Stones from other hills may serve to polish jade": Fudan's Whitzard (白泽) team releases JADE-DB, a demo dataset targeting domestic open-source and overseas commercial LLMs ☆496 · Nov 18, 2025 · Updated 3 months ago
- ☆34 · Jan 25, 2026 · Updated last month
- A graded, native-Chinese benchmark for testing code capabilities ☆15 · Apr 11, 2024 · Updated last year
- [ICML 2025] Official repository for the paper "OR-Bench: An Over-Refusal Benchmark for Large Language Models" ☆23 · Mar 4, 2025 · Updated 11 months ago
- Code and datasets for our ACM MM 2024 paper "Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed …" ☆11 · Sep 27, 2024 · Updated last year
- [AAAI 2026] ReCode: Reinforced Code Knowledge Editing for API Updates ☆22 · Jul 1, 2025 · Updated 8 months ago
- Official repository of Graph RAG-Tool Fusion and the ToolLinkOS dataset. ☆22 · Feb 13, 2025 · Updated last year
- Instruction Following Eval ☆15 · Jan 16, 2025 · Updated last year
- CoV: Chain-of-View Prompting for Spatial Reasoning ☆51 · Jan 23, 2026 · Updated last month
- White-box Fairness Testing through Adversarial Sampling ☆13 · Apr 16, 2021 · Updated 4 years ago
- A Lightweight Visual Reasoning Benchmark for Evaluating Large Multimodal Models through Complex Diagrams in Coding Tasks ☆14 · Feb 25, 2025 · Updated last year
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,129 · Feb 27, 2024 · Updated 2 years ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Jun 7, 2024 · Updated last year
- ☆26 · Feb 1, 2023 · Updated 3 years ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆99 · Mar 7, 2024 · Updated last year
- [AAAI'25] CharacterBench: Benchmarking Character Customization of Large Language Models ☆19 · Aug 1, 2025 · Updated 7 months ago
- ☆22 · Jan 14, 2025 · Updated last year
- ☆16 · Nov 26, 2024 · Updated last year
- An unofficial implementation of the SOLAR-10.7B model and the newly proposed interlocked-DUS (iDUS), with implementation and experiment details. ☆14 · Mar 20, 2024 · Updated last year
- [ACL 2025] Can MLLMs Understand the Deep Implication Behind Chinese Images? ☆20 · Oct 20, 2025 · Updated 4 months ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆76 · Mar 1, 2025 · Updated last year