Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group.
☆63 · May 21, 2024 · Updated last year
Alternatives and similar repositories for Flames
Users who are interested in Flames are comparing it to the repositories listed below.
- ☆15 · Mar 22, 2024 · Updated last year
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆109 · Feb 13, 2026 · Updated 2 weeks ago
- ☆21 · Aug 19, 2024 · Updated last year
- ☆44 · Jun 19, 2025 · Updated 8 months ago
- [EMNLP 2024] "ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models" ☆26 · Jun 24, 2024 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models ☆553 · Jul 20, 2023 · Updated 2 years ago
- [ICML 2025] Official repository for the paper "OR-Bench: An Over-Refusal Benchmark for Large Language Models" ☆23 · Mar 4, 2025 · Updated 11 months ago
- [EMNLP 2023 Demo] "CLEVA: Chinese Language Models EVAluation Platform" ☆63 · May 16, 2025 · Updated 9 months ago
- A script that uses GPT-4 to automatically evaluate language-model responses ☆16 · Jun 6, 2024 · Updated last year
- ☆17 · Nov 3, 2024 · Updated last year
- LLM evaluation ☆16 · Nov 7, 2023 · Updated 2 years ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆24 · Nov 29, 2024 · Updated last year
- Accepted by ECCV 2024 ☆188 · Oct 15, 2024 · Updated last year
- ☆37 · Jun 25, 2025 · Updated 8 months ago
- Language Understanding Augmentation Toolkit for Robustness Testing ☆20 · Jan 22, 2023 · Updated 3 years ago
- ☆48 · May 9, 2024 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆136 · Jun 5, 2024 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] ☆272 · Jul 28, 2025 · Updated 7 months ago
- Chinese safety prompts for evaluating and improving the safety of LLMs ☆1,129 · Feb 27, 2024 · Updated 2 years ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Mar 6, 2025 · Updated 11 months ago
- ☆29 · Feb 11, 2025 · Updated last year
- SC-Safety: a multi-round adversarial safety benchmark for Chinese large language models ☆150 · Mar 15, 2024 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆62 · Apr 30, 2025 · Updated 9 months ago
- Multilingual safety benchmark for Large Language Models ☆53 · Sep 1, 2024 · Updated last year
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆225 · Sep 29, 2024 · Updated last year
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆174 · Jun 27, 2025 · Updated 8 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Mar 8, 2025 · Updated 11 months ago
- ☆99 · Dec 5, 2023 · Updated 2 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Jul 8, 2024 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) ☆175 · Oct 27, 2023 · Updated 2 years ago
- Repo for the outstanding paper at ACL 2023, "Do PLMs Know and Understand Ontological Knowledge?" ☆33 · Oct 16, 2023 · Updated 2 years ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆76 · Mar 1, 2025 · Updated 11 months ago
- ☆28 · Oct 14, 2021 · Updated 4 years ago
- FlagEval is an evaluation toolkit for AI large foundation models ☆338 · Apr 24, 2025 · Updated 10 months ago
- Official GitHub repo for E-Eval, a Chinese K12 education evaluation benchmark for LLMs ☆29 · Feb 19, 2024 · Updated 2 years ago
- ☆113 · Oct 7, 2025 · Updated 4 months ago
- ☆32 · Apr 18, 2021 · Updated 4 years ago
- "Stones from other hills may serve to polish jade": Fudan's Whitzard (白泽) team releases JADE-DB, a demo dataset targeting domestic open-source and overseas commercial large language models ☆496 · Nov 18, 2025 · Updated 3 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Jun 7, 2024 · Updated last year