AI45Lab / Flames
Flames is a highly adversarial Chinese-language benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group.
☆38 Updated 8 months ago
Alternatives and similar repositories for Flames:
Users interested in Flames are comparing it to the repositories listed below.
- ☆78 Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆115 Updated 7 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆60 Updated this week
- ☆137 Updated 6 months ago
- Awesome papers for role-playing with language models ☆154 Updated 2 months ago
- Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)" ☆94 Updated last month
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆138 Updated 4 months ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆73 Updated 2 months ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆41 Updated 7 months ago
- ☆48 Updated 10 months ago
- ☆209 Updated 2 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆120 Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆107 Updated 2 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆100 Updated 4 months ago
- Repository for CharacterChat, a personalized social support system ☆64 Updated 6 months ago
- ☆128 Updated 9 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆64 Updated last month
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continued pre-training to improve … ☆31 Updated last month
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆65 Updated 3 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆66 Updated 6 months ago
- ☆47 Updated last week
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆46 Updated 9 months ago
- [EMNLP 2023 Demo] CLEVA: Chinese Language Models EVAluation Platform ☆60 Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆100 Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆185 Updated 7 months ago
- Fantastic Data Engineering for Large Language Models ☆67 Updated last month
- Collection of papers for scalable automated alignment. ☆82 Updated 3 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆333 Updated 4 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆136 Updated 7 months ago
- ☆52 Updated 5 months ago