AI45Lab / Flames
Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group.
☆60 · Updated last year
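For context, here is a minimal sketch of how an adversarial-prompt benchmark such as Flames is typically consumed: load the prompts, query the model under test, and save its responses for harmlessness scoring. The file name, JSON field names, and the `generate()` stub are assumptions for illustration only, not the repository's actual interface.

```python
# Minimal sketch of running a model over an adversarial prompt set such as Flames.
# NOTE: the file name, JSON field names, and the generate() stub below are
# illustrative assumptions, not the Flames repository's actual interface.
import json

def generate(prompt: str) -> str:
    """Placeholder for the model under evaluation (replace with an API or local inference call)."""
    return "抱歉，我不能协助这个请求。"  # canned refusal so the sketch runs end to end

def run_benchmark(prompt_path: str, output_path: str) -> None:
    """Read adversarial prompts, query the model, and store responses for later harmlessness scoring."""
    with open(prompt_path, encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            item = json.loads(line)
            record = {
                "prompt": item["prompt"],              # assumed field name
                "category": item.get("category"),      # e.g. fairness, safety, legality
                "response": generate(item["prompt"]),
            }
            fout.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    run_benchmark("flames_prompts.jsonl", "model_responses.jsonl")
```

The saved responses would then be scored manually or by a safety judge model to produce the benchmark's harmlessness results.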
Alternatives and similar repositories for Flames
Users interested in Flames are comparing it to the repositories listed below.
- Official github repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆251 · Updated 2 months ago
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆161 · Updated 7 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆396 · Updated 3 months ago
- Official github repo for AutoDetect, an automated weakness detection framework for LLMs. ☆44 · Updated last year
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆211 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆160 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆135 · Updated last year
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆98 · Updated 3 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆89 · Updated 4 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆276 · Updated 2 years ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆119 · Updated last year
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆80 · Updated 2 years ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆132 · Updated 11 months ago
- Awesome papers for role-playing with language models ☆205 · Updated 11 months ago
- [NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- ☆73 · Updated 8 months ago
- ☆21 · Updated last year
- ☆147 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆95 · Updated 7 months ago
- [EMNLP 2023 Demo] "CLEVA: Chinese Language Models EVAluation Platform" ☆62 · Updated 4 months ago
- ☆96 · Updated last year
- ☆51 · Updated last year
- WritingBench: A Comprehensive Benchmark for Generative Writing ☆121 · Updated last month
- ☆83 · Updated last year
- ☆145 · Updated last year
- Collection of papers for scalable automated alignment. ☆93 · Updated 11 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆178 · Updated 3 months ago
- Generative Judge for Evaluating Alignment ☆246 · Updated last year
- LLM hallucination paper list ☆323 · Updated last year
- ☆306 · Updated last year