S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models
☆111 · Feb 13, 2026 · Updated last month
Alternatives and similar repositories for S-Eval
Users interested in S-Eval are comparing it to the libraries listed below.
- Flames is a highly adversarial benchmark in Chinese for LLM harmlessness evaluation, developed by Shanghai AI Lab and the Fudan NLP Group. ☆63 · May 21, 2024 · Updated last year
- SC-Safety: a Chinese multi-turn adversarial safety benchmark for large language models ☆150 · Mar 15, 2024 · Updated 2 years ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆24 · Nov 29, 2024 · Updated last year
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ☆18 · Sep 1, 2025 · Updated 6 months ago
- ☆11 · Mar 13, 2023 · Updated 3 years ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆274 · Jul 28, 2025 · Updated 7 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Mar 8, 2025 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆573 · Feb 27, 2026 · Updated 3 weeks ago
- Code release for RobOT (ICSE'21) ☆15 · Dec 5, 2022 · Updated 3 years ago
- LLM evaluation. ☆16 · Nov 7, 2023 · Updated 2 years ago
- ☆27 · Feb 1, 2023 · Updated 3 years ago
- White-box Fairness Testing through Adversarial Sampling ☆14 · Apr 16, 2021 · Updated 4 years ago
- The repo for using the model https://huggingface.co/thu-coai/Attacker-v0.1 ☆13 · Apr 23, 2025 · Updated 10 months ago
- ☆35 · Jan 7, 2025 · Updated last year
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,136 · Feb 27, 2024 · Updated 2 years ago
- Code to enable layer-level steering in LLMs using sparse autoencoders ☆31 · Sep 18, 2025 · Updated 6 months ago
- ☆22 · Jan 14, 2025 · Updated last year
- Instruction Following Eval ☆16 · Jan 16, 2025 · Updated last year
- The official code for the paper "What Constitutes a Faithful Summary? Preserving Author Perspectives in News Summarization" ☆10 · Jun 23, 2024 · Updated last year
- HOD: A Benchmark Dataset for Harmful Object Detection ☆36 · Jun 11, 2025 · Updated 9 months ago
- Accepted by ECCV 2024 ☆193 · Oct 15, 2024 · Updated last year
- ☆49 · Feb 25, 2026 · Updated 3 weeks ago
- Materials for "Multi-property Steering of Large Language Models with Dynamic Activation Composition" ☆14 · Nov 22, 2024 · Updated last year
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" ☆22 · Aug 9, 2025 · Updated 7 months ago
- ☆11 · Jan 3, 2024 · Updated 2 years ago
- Code for paper: AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients ☆14 · Dec 23, 2019 · Updated 6 years ago
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆80 · Jun 6, 2024 · Updated last year
- A rebellion to make the legacy Cura engine great again! ☆13 · May 7, 2020 · Updated 5 years ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆101 · Mar 7, 2024 · Updated 2 years ago
- ☆27 · Jun 5, 2024 · Updated last year
- [ICLR 2025] Official implementation for "SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanati… ☆43 · Feb 11, 2025 · Updated last year
- [ACL 2025] Can MLLMs Understand the Deep Implication Behind Chinese Images? ☆21 · Oct 20, 2025 · Updated 5 months ago
- [ICLR 2026] Official Implementation of ProxyThinker: Test-Time Guidance through Small Visual Reasoners. ☆20 · Sep 24, 2025 · Updated 5 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Aug 2, 2024 · Updated last year
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. ☆17 · Jun 5, 2025 · Updated 9 months ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆77 · Mar 1, 2025 · Updated last year
- Advances in Neural Information Processing Systems (NeurIPS 2021) ☆23 · Nov 4, 2022 · Updated 3 years ago
- ☆20 · Jul 24, 2024 · Updated last year
- ☆20 · Jun 24, 2022 · Updated 3 years ago