chenyiqun / MMOA-RAG
This is the code for MMOA-RAG.
☆69 · Updated 3 months ago
Alternatives and similar repositories for MMOA-RAG
Users interested in MMOA-RAG are comparing it to the repositories listed below.
- The code and data of DPA-RAG, accepted by WWW 2025 main conference. ☆61 · Updated 7 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆116 · Updated 6 months ago
- [ICLR 2025] This is the code repo for our ICLR’25 paper "RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rew… ☆42 · Updated 6 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆139 · Updated 9 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆118 · Updated 6 months ago
- Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL. ☆36 · Updated this week
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆236 · Updated last week
- This is the implementation of LeCo. ☆31 · Updated 7 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆53 · Updated 2 months ago
- Code for the paper: Metacognitive Retrieval-Augmented Large Language Models ☆34 · Updated last year
- Code implementation of synthetic continued pretraining ☆123 · Updated 7 months ago
- MARFT stands for Multi-Agent Reinforcement Fine-Tuning. This repository implements an LLM-based multi-agent reinforcement fine-tuning fra… ☆59 · Updated last week
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆77 · Updated 5 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆96 · Updated 3 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆73 · Updated last month
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆127 · Updated 5 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆108 · Updated 3 months ago
- A Comprehensive Library for Memory of LLM-based Agents. ☆65 · Updated 3 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆122 · Updated last month
- [NeurIPS 2024] Source code for xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token ☆148 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 6 months ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆27 · Updated last year
- Code for the 2025 ACL publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆31 · Updated last month