MingyuJ666 / Disentangling-Memory-and-Reasoning
[ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.
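The description says the method combines prompt tuning (trainable soft prompts) with LoRA (low-rank weight updates on a frozen base). As a rough illustration of how those two pieces fit together, here is a minimal PyTorch sketch; it is not the authors' code, and all module names, ranks, and dimensions are illustrative assumptions.

```python
# Conceptual sketch (NOT the repo's implementation): soft-prompt tuning
# combined with a LoRA-style low-rank adapter. Names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to the input embeddings."""
    def __init__(self, n_tokens: int, dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.01)

    def forward(self, input_embeds):  # input_embeds: (batch, seq, dim)
        batch = input_embeds.size(0)
        return torch.cat([self.prompt.expand(batch, -1, -1), input_embeds], dim=1)

# Tiny usage example on random data
dim, n_prompt = 32, 4
layer = LoRALinear(nn.Linear(dim, dim))
soft = SoftPrompt(n_prompt, dim)
x = torch.randn(2, 10, dim)   # (batch, seq, dim) stand-in for token embeddings
h = soft(x)                   # prompts prepended -> (2, 14, 32)
y = layer(h)
print(tuple(y.shape))         # (2, 14, 32)
```

Only the soft prompt and the low-rank factors receive gradients; the base weights stay frozen, which is what makes both techniques parameter-efficient.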
☆54 · Updated 2 weeks ago
Alternatives and similar repositories for Disentangling-Memory-and-Reasoning
Users interested in Disentangling-Memory-and-Reasoning are comparing it to the repositories listed below.
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆27 · Updated 2 weeks ago
- ☆89 · Updated last week
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆54 · Updated 6 months ago
- ☆22 · Updated 10 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆73 · Updated this week
- The implementation of LeCo ☆31 · Updated 4 months ago
- ☆105 · Updated 2 months ago
- The official code of the paper "Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning" ☆117 · Updated this week
- ☆42 · Updated 2 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆94 · Updated 3 months ago
- The demo, code, and data of FollowRAG ☆72 · Updated last month
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from google de… ☆32 · Updated 2 months ago
- SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis ☆61 · Updated this week
- ☆60 · Updated 2 weeks ago
- This repository collects research papers on learning from rewards in the context of post-training and test-time scaling of large language… ☆37 · Updated 3 weeks ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆97 · Updated this week
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆70 · Updated 2 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 3 weeks ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs ☆42 · Updated 11 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆45 · Updated 7 months ago
- [ICML 2025] Test-time preference optimization ☆128 · Updated 3 weeks ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆98 · Updated last month
- [ICLR 2025] Code repo for the paper "RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rew…" ☆38 · Updated 3 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆36 · Updated 3 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆64 · Updated last month
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models" (ICLR 2025) ☆22 · Updated 3 months ago
- Code and data of DPA-RAG, accepted to the WWW 2025 main conference ☆61 · Updated 4 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 7 months ago
- ☆24 · Updated 2 months ago