amazon-science / auto-rag-eval
Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation"
☆85 · Updated last year
Alternatives and similar repositories for auto-rag-eval
Users interested in auto-rag-eval are comparing it to the repositories listed below.
- RefChecker provides an automatic checking pipeline and benchmark dataset for detecting fine-grained hallucinations generated by Large Langua… ☆402 · Updated 6 months ago
- Automated Evaluation of RAG Systems ☆674 · Updated 8 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆212 · Updated 11 months ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆165 · Updated last year
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆578 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆550 · Updated last week
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆300 · Updated last year
- Benchmarking library for RAG ☆248 · Updated last month
- Comprehensive benchmark for RAG ☆242 · Updated 5 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆135 · Updated last year
- Code for the paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment" ☆395 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆154 · Updated this week
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆443 · Updated last year
- Awesome synthetic (text) datasets ☆310 · Updated last week
- Official implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆234 · Updated last month
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆117 · Updated last month
- [ACL 2025] AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆161 · Updated last month
- Repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆206 · Updated 11 months ago
- [ACL'25] Official code for "LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs" ☆314 · Updated 4 months ago
- Model, code & data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆137 · Updated 2 years ago
- Fine-Tuning Embedding for RAG with Synthetic Data ☆518 · Updated 2 years ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆123 · Updated last month
- LangChain, Llama2-Chat, and zero- and few-shot prompting are used to generate synthetic datasets for IR and RAG system evaluation ☆37 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper ☆79 · Updated last year
- Repository for "MultiHop-RAG: A Dataset for Evaluating Retrieval-Augmented Generation Across Documents" (COLM 2024) ☆391 · Updated 7 months ago
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆191 · Updated 3 months ago
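Several of the tools above evaluate RAG systems by pairwise comparison; RAGElo, for instance, ranks agents with an Elo ranker. As a rough illustration of the underlying idea only (this is the standard Elo update formula, not RAGElo's actual API), a single judged comparison between two RAG variants updates their ratings like this:

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One pairwise Elo update.

    score_a is 1.0 if system A wins the judged comparison,
    0.0 if it loses, and 0.5 for a tie. k controls update size.
    """
    # Expected score of A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    # Move each rating toward the observed outcome; updates are symmetric,
    # so total rating mass is conserved.
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two RAG variants start at 1500; variant A wins one judged comparison.
a, b = elo_update(1500.0, 1500.0, 1.0)  # → (1516.0, 1484.0)
```

Repeating this over many LLM-judged head-to-head comparisons converges to a leaderboard in which the rating gap reflects each system's expected win rate against the others.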