weizhepei / InstructRAG
[ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales
☆134 · Updated 11 months ago
Alternatives and similar repositories for InstructRAG
Users interested in InstructRAG are comparing it to the repositories listed below.
- The code and data of DPA-RAG, accepted by WWW 2025 main conference. ☆63 · Updated 2 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆167 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆82 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆144 · Updated 3 weeks ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆150 · Updated last year
- Implementation of the paper: "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆76 · Updated last year
- 🌲 Code for our EMNLP 2023 paper - 🎄 "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Mode… ☆52 · Updated 2 years ago
- Code implementation of synthetic continued pretraining ☆145 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆61 · Updated 8 months ago
- ☆37 · Updated 11 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆139 · Updated last year
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆64 · Updated 10 months ago
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs (ACL 2024) ☆73 · Updated 8 months ago
- CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmented Generation ☆65 · Updated 7 months ago
- RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation.