amazon-science / auto-rag-eval
Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation"
☆81 · Updated last year
Alternatives and similar repositories for auto-rag-eval
Users interested in auto-rag-eval are comparing it to the repositories listed below.
- RefChecker provides an automatic checking pipeline and benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models ☆383 · Updated 3 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆196 · Updated 9 months ago
- Automated Evaluation of RAG Systems ☆647 · Updated 5 months ago
- Comprehensive benchmark for RAG ☆211 · Updated 2 months ago
- Code for the paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment" ☆373 · Updated last year
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆160 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆525 · Updated last week
- Benchmarking library for RAG ☆224 · Updated last month
- ☆203 · Updated 8 months ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use cases ☆138 · Updated last week
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆557 · Updated last year
- ☆204 · Updated last year
- ☆145 · Updated last year
- awesome synthetic (text) datasets ☆295 · Updated last month
- Repository for "MultiHop-RAG: A Dataset for Evaluating Retrieval-Augmented Generation Across Documents" (COLM 2024) ☆356 · Updated 5 months ago
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆435 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆132 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker ☆114 · Updated this week
- [ICLR 2024 & NeurIPS 2023 WS] An Evaluator LM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specifically d… ☆305 · Updated last year
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 8 months ago
- ☆42 · Updated last year
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆226 · Updated 2 months ago
- Vision Document Retrieval (ViDoRe): Benchmark. Evaluation code for the ColPali paper. ☆233 · Updated 3 weeks ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆544 · Updated last week
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆274 · Updated 10 months ago
- Official repository for ORPO ☆463 · Updated last year
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆135 · Updated last year
- [ACL 2025] AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆153 · Updated last month
- ARAGOG (Advanced RAG Output Grading): exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research papers ☆109 · Updated last year