Liyan06 / MiniCheck
MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024]
☆193 · Updated 4 months ago
Alternatives and similar repositories for MiniCheck
Users interested in MiniCheck are comparing it to the libraries listed below.
- Code accompanying "How I learned to start worrying about prompt formatting" ☆113 · Updated 7 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 2 months ago
- Dense X Retrieval: What Retrieval Granularity Should We Use? ☆166 · Updated 2 years ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆217 · Updated last year
- Comprehensive benchmark for RAG ☆256 · Updated 7 months ago
- ☆229 · Updated 2 months ago
- [EMNLP 2024] OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs ☆147 · Updated last year
- Official implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆236 · Updated 3 months ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆156 · Updated 2 years ago
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆197 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆213 · Updated 6 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information ☆313 · Updated last year
- Repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆207 · Updated last week
- ☆162 · Updated last year
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs" ☆241 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆222 · Updated last month
- Complex Function Calling Benchmark ☆160 · Updated 11 months ago
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation ☆110 · Updated 2 years ago
- Vision Document Retrieval (ViDoRe): benchmark and evaluation code for the ColPali paper ☆258 · Updated 5 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆102 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆253 · Updated 2 years ago
- Benchmarking library for RAG ☆253 · Updated 3 months ago
- Source code of the paper "RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering" [F… ☆68 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- Awesome synthetic (text) datasets ☆321 · Updated last week
- ToolQA: a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆284 · Updated 2 years ago
- The official evaluation suite and dynamic data release for MixEval ☆253 · Updated last year
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆59 · Updated 6 months ago