OpenMOSS / HalluQA
Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"
☆118 · Updated 8 months ago
Alternatives and similar repositories for HalluQA:
Users interested in HalluQA are comparing it to the repositories listed below.
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆74 · Updated 3 months ago
- ☆130 · Updated 10 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆64 · Updated this week
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆88 · Updated 10 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆96 · Updated 2 months ago
- ☆94 · Updated last year
- Chinese Large Language Model Evaluation, Phase 2 ☆70 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆38 · Updated 11 months ago
- ☆80 · Updated last year
- ☆125 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆240 · Updated last year
- Chinese Large Language Model Evaluation, Phase 1 ☆108 · Updated last year
- ☆45 · Updated 8 months ago
- ☆139 · Updated 7 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆136 · Updated 7 months ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆47 · Updated 10 months ago
- ☆162 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆100 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆142 · Updated 5 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 2 months ago
- EMNLP'2024: Knowledge Verification to Nip Hallucination in the Bud ☆22 · Updated 11 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆337 · Updated 5 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆68 · Updated 6 months ago
- ☆96 · Updated 11 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat ☆112 · Updated last year
- How to train an LLM tokenizer ☆140 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆77 · Updated last year
- Source code for the ACL 2023 paper "Decoder Tuning: Efficient Language Understanding as Decoding" ☆48 · Updated last year
- ☆95 · Updated 4 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆233 · Updated 3 months ago