OpenMOSS / HalluQA
Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"
☆128 · Updated 11 months ago
Alternatives and similar repositories for HalluQA
Users interested in HalluQA are comparing it to the repositories listed below.
- ☆97 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆79 · Updated 6 months ago
- ☆141 · Updated last year
- Chinese Large Language Model Evaluation, Round 1 ☆109 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆83 · Updated 3 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- ☆128 · Updated 2 years ago
- Chinese Large Language Model Evaluation, Round 2 ☆70 · Updated last year
- ☆97 · Updated last year
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- ☆81 · Updated last year
- [EMNLP 2023 Demo] "CLEVA: Chinese Language Models EVAluation Platform" [ACL 2025 Findings] "C2LEVA: Toward Comprehensive and Contaminatio… ☆62 · Updated 2 weeks ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- Chinese instruction tuning datasets ☆131 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆89 · Updated 10 months ago
- ☆142 · Updated 11 months ago
- ☆169 · Updated last year
- ☆162 · Updated 2 years ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 2 weeks ago
- How to train an LLM tokenizer ☆148 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆101 · Updated last year
- ☆63 · Updated 2 years ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆100 · Updated last month
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- An open-source conversational language model developed by the Knowledge Works Research Laboratory at Fudan University ☆65 · Updated last year
- ☆94 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆368 · Updated 8 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆153 · Updated 8 months ago
- Lightweight local website for displaying the performance of different chat models ☆86 · Updated last year
- ☆172 · Updated 2 years ago