OpenMOSS / HalluQA
Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"
☆124 · Updated 10 months ago
Alternatives and similar repositories for HalluQA:
Users interested in HalluQA are comparing it to the repositories listed below.
- ☆139 · Updated last year
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆78 · Updated 5 months ago
- ☆81 · Updated last year
- ☆143 · Updated 9 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆76 · Updated last month
- ☆96 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆41 · Updated last year
- Chinese Large Language Model Evaluation, Round 2 ☆70 · Updated last year
- Chinese Large Language Model Evaluation, Round 1 ☆108 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆97 · Updated 4 months ago
- Measuring Massive Multitask Chinese Understanding ☆87 · Updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- ☆97 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆83 · Updated 8 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆147 · Updated 7 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 9 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆251 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆358 · Updated 7 months ago
- ☆128 · Updated last year
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆99 · Updated last year
- ☆160 · Updated 2 years ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 4 months ago
- ☆46 · Updated 10 months ago
- Chinese instruction tuning datasets ☆128 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆205 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- ☆167 · Updated last year
- ☆59 · Updated last year