ParticleMedia / RAGTruth
Github repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models"
☆223 · Updated last year
Alternatives and similar repositories for RAGTruth
Users interested in RAGTruth are comparing it to the libraries listed below.
- ☆187 · Updated 7 months ago
- ☆294 · Updated 2 years ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆285 · Updated 2 years ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆509 · Updated last year
- RARR: Researching and Revising What Language Models Say, Using Language Models ☆51 · Updated 2 years ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆415 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆129 · Updated last year
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation. ☆144 · Updated last month
- Repository for MuSiQue: Multi-hop Questions via Single-hop Question Composition, TACL 2022 ☆189 · Updated last year
- ACL 2023 - AlignScore, a metric for factual consistency evaluation. ☆148 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆227 · Updated last year
- ☆59 · Updated 2 months ago
- [NAACL'24] Dataset, code and models for "TableLlama: Towards Open Large Generalist Models for Tables". ☆136 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆189 · Updated 4 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆373 · Updated 2 years ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆135 · Updated last year
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation. ☆111 · Updated 2 years ago
- Comprehensive benchmark for RAG ☆260 · Updated 7 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆146 · Updated last month
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆136 · Updated last year
- Generative Judge for Evaluating Alignment ☆250 · Updated 2 years ago
- Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL 2023 ☆249 · Updated last year
- A Survey of Attributions for Large Language Models ☆222 · Updated 3 weeks ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆168 · Updated 2 years ago
- Multilingual Large Language Models Evaluation Benchmark ☆133 · Updated last year
- Benchmarking library for RAG ☆255 · Updated last week
- [ACL 2025] AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆165 · Updated 3 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆552 · Updated last year
- [NAACL 2024] End-to-End Beam Retrieval for Multi-Hop Question Answering ☆124 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆217 · Updated 7 months ago