[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
☆90 · Apr 28, 2024 · Updated last year
Alternatives and similar repositories for FactCHD
Users interested in FactCHD are comparing it to the repositories listed below.
- Code and data for the FACTOR paper ☆53 · Nov 15, 2023 · Updated 2 years ago
- ☆49 · Jan 7, 2024 · Updated 2 years ago
- ☆21 · Aug 19, 2024 · Updated last year
- ☆22 · Feb 3, 2024 · Updated 2 years ago
- The official implementation of Collaborative Word-based Pre-trained Item Representation for Transferable Recommendation ☆25 · Jan 30, 2024 · Updated 2 years ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆605 · Jun 26, 2024 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆63 · Apr 30, 2025 · Updated 10 months ago
- ✨✨ Official repo for "Comparative Analysis of Demonstration Selection Algorithms for LLM In-Context Learning" ☆16 · Nov 8, 2024 · Updated last year
- List of papers on hallucination detection in LLMs ☆1,060 · Jan 11, 2026 · Updated 2 months ago
- ☆18 · Apr 23, 2024 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆64 · Dec 25, 2023 · Updated 2 years ago
- Proof system for fact verification ☆15 · Jun 7, 2022 · Updated 3 years ago
- EMNLP 2024: Knowledge Verification to Nip Hallucination in the Bud ☆23 · Mar 10, 2024 · Updated 2 years ago
- Dataset for the paper "GenWiki: A Dataset of 1.3 Million Content-Sharing Text and Graphs for Unsupervised Graph-to-Text Generation" ☆26 · Jan 2, 2024 · Updated 2 years ago
- ☆89 · Nov 11, 2022 · Updated 3 years ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆49 · Oct 21, 2023 · Updated 2 years ago
- [ICLR 2024] Official implementation of the paper "Beyond Imitation: Leveraging Fine-Grained Quality Signals for Alignment" ☆10 · May 5, 2024 · Updated last year
- Perform fact checks on your conversations with LLMs to catch fake news, misleading information, and LLM confusion ☆12 · Apr 22, 2023 · Updated 2 years ago
- [ECAI 2023] MonoSKD: General Distillation Framework for Monocular 3D Object Detection via Spearman Correlation Coefficient ☆32 · Dec 8, 2023 · Updated 2 years ago
- ☆12 · Nov 30, 2023 · Updated 2 years ago
- Code for "FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge" (EMNLP 2023) ☆20 · Dec 25, 2023 · Updated 2 years ago
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,078 · Sep 27, 2025 · Updated 5 months ago
- Token-level Reference-free Hallucination Detection ☆97 · Jul 25, 2023 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Jul 24, 2023 · Updated 2 years ago
- ☆25 · Aug 1, 2023 · Updated 2 years ago
- Prompt-Guided Retrieval for Non-Knowledge-Intensive Tasks ☆12 · Sep 1, 2023 · Updated 2 years ago
- A novel jailbreak attack unveiling an overlooked attack surface inherent in the chain-of-thought reasoning trajectory of LLMs ☆22 · Sep 18, 2025 · Updated 6 months ago
- [APSIPA ASC 2023] Official code of the paper "FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Au…" ☆17 · Mar 7, 2024 · Updated 2 years ago
- FacTool: Factuality Detection in Generative AI ☆916 · Aug 19, 2024 · Updated last year
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆567 · Feb 12, 2024 · Updated 2 years ago
- Repository for the paper "DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models" (EMNLP 2024 …) ☆18 · Apr 5, 2025 · Updated 11 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Jan 17, 2024 · Updated 2 years ago
- A tool to assist in the interpretation of learned features in sparse autoencoders (in particular the four SAEs trained by Joseph Bloom o…) ☆19 · Oct 4, 2024 · Updated last year
- ICL backdoor attack ☆17 · Nov 4, 2024 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆147 · Oct 13, 2025 · Updated 5 months ago
- ☆11 · Oct 8, 2023 · Updated 2 years ago
- An easy-to-use hallucination detection framework for LLMs ☆63 · Apr 21, 2024 · Updated last year
- [ACL 2024] User-friendly evaluation framework: Eval Suite & benchmarks (UHGEval, HaluEval, HalluQA, etc.) ☆180 · Jun 7, 2025 · Updated 9 months ago
- ☆38 · Jan 17, 2025 · Updated last year