Liyan06 / MiniCheck
MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024]
☆162 · Updated 4 months ago
Alternatives and similar repositories for MiniCheck
Users interested in MiniCheck are comparing it to the libraries listed below.
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆180 · Updated 5 months ago
- Code accompanying "How I learned to start worrying about prompt formatting" ☆105 · Updated 8 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆110 · Updated 8 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" ☆156 · Updated last month
- [EMNLP 2024 Findings] OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs ☆147 · Updated 6 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆135 · Updated 6 months ago
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆174 · Updated 3 months ago
- ☆45 · Updated last week
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆126 · Updated 9 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information ☆235 · Updated 7 months ago
- Codes and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref…" ☆56 · Updated 2 months ago
- Comprehensive benchmark for RAG ☆184 · Updated 6 months ago
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation ☆98 · Updated last year
- ☆40 · Updated 2 months ago
- Benchmarking Chat Assistants on Long-Term Interactive Memory (ICLR 2025) ☆102 · Updated last month
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆141 · Updated 7 months ago
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark ☆140 · Updated 5 months ago
- ☆69 · Updated last year
- Source code of the paper "RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering" [F…] ☆64 · Updated last year
- Beating the GAIA benchmark with Transformers Agents 🚀 ☆120 · Updated 3 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆220 · Updated 7 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆223 · Updated 6 months ago
- ☆149 · Updated last year
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation ☆132 · Updated 2 weeks ago
- Codebase accompanying the Summary of a Haystack paper ☆78 · Updated 8 months ago
- ☆121 · Updated 11 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆86 · Updated last year
- ☆109 · Updated 2 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆198 · Updated last month
- [NeurIPS 2023] This is the code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆150 · Updated last year