Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"
☆136 · Updated Jun 5, 2024
Alternatives and similar repositories for HalluQA
Users interested in HalluQA are comparing it to the repositories listed below.
- Code and data for the FACTOR paper☆53 · Updated Nov 15, 2023
- TruthfulQA: Measuring How Models Imitate Human Falsehoods☆885 · Updated Jan 16, 2025
- [ICML'2024] Can AI Assistants Know What They Don't Know?☆85 · Updated Feb 5, 2024
- Research on evaluating and aligning the values of Chinese large language models☆553 · Updated Jul 20, 2023
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024)☆420 · Updated Oct 25, 2025
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.☆180 · Updated Jun 7, 2025
- A Python tool to help interact with ChatGPT.☆10 · Updated Dec 11, 2022
- ☆99 · Updated Dec 5, 2023
- ☆21 · Updated Aug 19, 2024
- [Findings of EMNLP'2024] Unified Active Retrieval for Retrieval Augmented Generation☆23 · Updated Sep 30, 2024
- FacTool: Factuality Detection in Generative AI☆913 · Updated Aug 19, 2024
- Chinese safety prompts for evaluating and improving the safety of LLMs.☆1,129 · Updated Feb 27, 2024
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models.☆554 · Updated Feb 12, 2024
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark☆393 · Updated Jul 9, 2024
- Source code for paper "ATP: AMRize Than Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022☆14 · Updated Mar 31, 2023
- ☆109 · Updated Jul 15, 2025
- ☆49 · Updated Jan 7, 2024
- [ACL'2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation☆77 · Updated Mar 13, 2024
- Towards Systematic Measurement for Long Text Quality☆37 · Updated Sep 5, 2024
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models"☆209 · Updated Oct 11, 2023
- [ACL2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step☆304 · Updated Apr 3, 2024
- Do Large Language Models Know What They Don't Know?☆102 · Updated Nov 8, 2024
- ☆21 · Updated Mar 19, 2021
- ☆921 · Updated May 22, 2024
- [NeurIPS 2024] Can Language Models Learn to Skip Steps?☆22 · Updated Jan 25, 2025
- A reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …☆1,076 · Updated Sep 27, 2025
- A survey of long-context LLMs from four perspectives: architecture, infrastructure, training, and evaluation☆61 · Updated Mar 31, 2025
- [LREC] MMChat: Multi-Modal Chat Dataset on Social Media☆108 · Updated Sep 25, 2022
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication☆21 · Updated Mar 21, 2024
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models.☆720 · Updated Jan 7, 2025
- ☆143 · Updated May 14, 2025
- [ACL 23] CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors☆40 · Updated Dec 14, 2025
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation☆152 · Updated May 10, 2023
- TOD-Flow: Modeling the Structure of Task-Oriented Dialogues☆13 · Updated Feb 7, 2024
- ☆89 · Updated Nov 11, 2022
- Math24o: A Chinese benchmark of high school Olympiad mathematics competition problems☆11 · Updated Mar 27, 2025
- Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group.☆63 · Updated May 21, 2024
- CMMLU: Measuring massive multitask language understanding in Chinese☆804 · Updated Dec 6, 2024
- [Findings of ACL'2023] Improving Contrastive Learning of Sentence Embeddings from AI Feedback☆40 · Updated Aug 14, 2023