amazon-science / RefChecker
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations in the outputs of Large Language Models.
☆341 · Updated 3 months ago
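At a high level, RefChecker decomposes a response into fine-grained claims and then verifies each claim against reference text, assigning an entailment-style label per claim. The sketch below illustrates that two-stage extract-then-check shape in plain Python; `call_llm`, the prompts, and the label names are illustrative assumptions, not RefChecker's actual API.

```python
# Minimal sketch of an extract-then-check hallucination pipeline.
# `call_llm` stands in for any chat-completion function; prompts and
# labels are assumptions for illustration, not RefChecker's real API.
from typing import Callable, Dict, List

EXTRACT_PROMPT = (
    "Break the following response into atomic factual claims, one per line:\n\n"
    "{response}"
)
CHECK_PROMPT = (
    "Reference:\n{reference}\n\nClaim: {claim}\n\n"
    "Answer with exactly one label: Entailment, Neutral, or Contradiction."
)


def extract_claims(response: str, call_llm: Callable[[str], str]) -> List[str]:
    """Stage 1: decompose a model response into fine-grained claims."""
    raw = call_llm(EXTRACT_PROMPT.format(response=response))
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]


def check_claims(
    claims: List[str], reference: str, call_llm: Callable[[str], str]
) -> Dict[str, str]:
    """Stage 2: label each claim against the reference text."""
    return {
        claim: call_llm(CHECK_PROMPT.format(reference=reference, claim=claim)).strip()
        for claim in claims
    }


def hallucination_rate(labels: Dict[str, str]) -> float:
    """Fraction of claims the checker did not judge as entailed."""
    if not labels:
        return 0.0
    return sum(1 for v in labels.values() if v != "Entailment") / len(labels)
```

Separating extraction from checking is what makes the diagnosis fine-grained: the rate is computed over atomic claims rather than over whole responses.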
Alternatives and similar repositories for RefChecker:
Users interested in RefChecker are comparing it to the libraries listed below.
- RAGChecker: A Fine-grained Framework For Diagnosing RAG ☆749 · Updated 2 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (see the sketch after this list) ☆493 · Updated 7 months ago
- Repository for "MultiHop-RAG: A Dataset for Evaluating Retrieval-Augmented Generation Across Documents" (COLM 2024) ☆261 · Updated 3 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆146 · Updated 2 months ago
- Automated Evaluation of RAG Systems ☆546 · Updated 3 months ago
- Corrective Retrieval Augmented Generation ☆346 · Updated 4 months ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆146 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆251 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆404 · Updated this week
- Forward-Looking Active REtrieval-augmented generation (FLARE) ☆604 · Updated last year
- Benchmarking library for RAG ☆167 · Updated this week
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆475 · Updated 4 months ago
- This is an implementation of the paper: Searching for Best Practices in Retrieval-Augmented Generation (EMNLP 2024) ☆285 · Updated 2 months ago
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆189 · Updated 10 months ago
- Is ChatGPT Good at Search? LLMs as Re-Ranking Agent [EMNLP 2023 Outstanding Paper Award] ☆572 · Updated 11 months ago
- Comprehensive benchmark for RAG ☆116 · Updated 3 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆437 · Updated last year
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆220 · Updated 5 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆333 · Updated last year
- ☆271 · Updated last year
- Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23 ☆189 · Updated 8 months ago
- Evaluation tools for Retrieval-augmented Generation (RAG) methods. ☆147 · Updated 3 months ago
- Code for paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment" ☆293 · Updated last year
- List of papers on hallucination detection in LLMs. ☆773 · Updated 2 months ago
- Guideline following Large Language Model for Information Extraction ☆343 · Updated 3 months ago
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆198 · Updated 3 months ago
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆342 · Updated this week
- The official repository for the paper: Evaluation of Retrieval-Augmented Generation: A Survey. ☆129 · Updated 4 months ago
- [EMNLP 2024: Demo Oral] RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation ☆288 · Updated 4 months ago
- Official Implementation of NeurIPS 2024 paper "G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question A… ☆396 · Updated 3 months ago
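The sampling-based idea behind SelfCheckGPT, referenced in the list above, contrasts with RefChecker's reference-grounded checking: instead of comparing against an external reference, it draws several stochastic responses to the same prompt and flags sentences of the main response that the samples fail to support. The sketch below is a minimal, assumption-laden illustration: the crude unigram-overlap scorer stands in for the BERTScore, NLI, and prompting variants the paper actually evaluates, and the function names are hypothetical.

```python
# Minimal sketch of SelfCheckGPT-style zero-resource consistency checking.
# The unigram-overlap scorer is a deliberate simplification of the paper's
# BERTScore / NLI / prompting variants; no reference text is required.
from typing import List


def support(sentence: str, sample: str) -> float:
    """Fraction of the sentence's word types that also appear in a sample."""
    words = set(sentence.lower().split())
    if not words:
        return 1.0
    return len(words & set(sample.lower().split())) / len(words)


def inconsistency_scores(sentences: List[str], samples: List[str]) -> List[float]:
    """Higher score = less support among sampled responses = more suspect."""
    if not samples:
        raise ValueError("need at least one sampled response")
    return [1.0 - max(support(s, smp) for smp in samples) for s in sentences]


# Usage: split the main response into sentences, draw N extra samples from
# the same model at temperature > 0, and flag sentences scoring near 1.0.
```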