amazon-science / auto-rag-eval
Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation"
☆71 · Updated 8 months ago
Alternatives and similar repositories for auto-rag-eval:
Users interested in auto-rag-eval are comparing it to the libraries listed below.
- ☆141 · Updated 7 months ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆146 · Updated last year
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆83 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 5 months ago
- RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models ☆341 · Updated 3 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆125 · Updated 11 months ago
- Benchmarking library for RAG ☆166 · Updated last week
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆147 · Updated 2 months ago
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆125 · Updated 2 months ago
- ☆117 · Updated 4 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆106 · Updated last week
- ARAGOG: Advanced RAG Output Grading. Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on AI research papers ☆101 · Updated 10 months ago
- Code for Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks ☆54 · Updated 10 months ago
- Comprehensive benchmark for RAG ☆114 · Updated 3 months ago
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆124 · Updated last month
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆100 · Updated 5 months ago
- ☆37 · Updated 6 months ago
- ☆73 · Updated last month
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆51 · Updated 3 months ago
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators" ☆124 · Updated last year
- ☆108 · Updated 5 months ago
- ☆139 · Updated 10 months ago
- LangChain, Llama2-Chat, and zero- and few-shot prompting are used to generate synthetic datasets for IR and RAG system evaluation ☆37 · Updated last year
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆197 · Updated 4 months ago
- The official repository for the paper "Evaluation of Retrieval-Augmented Generation: A Survey" ☆129 · Updated 4 months ago
- Automated Evaluation of RAG Systems ☆547 · Updated 3 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆128 · Updated 3 months ago
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. ☆37 · Updated 6 months ago
- This repository contains the application code for a generative AI analytics platform. ☆23 · Updated 3 months ago
- Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆198 · Updated 3 months ago