sastpg / RFTT
RFTT: Reasoning with Reinforced Functional Token Tuning
☆27 · Updated 2 weeks ago
Alternatives and similar repositories for RFTT
Users interested in RFTT are comparing it to the repositories listed below.
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆54 · Updated 6 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆191 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆65 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆75 · Updated 3 weeks ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆47 · Updated 2 weeks ago
- ☆64 · Updated 3 weeks ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆65 · Updated 2 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆73 · Updated last week
- Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill? ☆29 · Updated 3 weeks ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 4 months ago
- ☆46 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆124 · Updated 3 months ago
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆18 · Updated last week
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆72 · Updated 3 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆125 · Updated last week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆94 · Updated 3 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆111 · Updated this week
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆100 · Updated 4 months ago
- A comprehensive collection of process reward models. ☆92 · Updated 2 weeks ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆177 · Updated 5 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆113 · Updated 2 months ago
- Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆170 · Updated last week
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆36 · Updated last week
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆38 · Updated last month
- Official implementation for EMNLP 2024 (main) "AgentReview: Exploring Academic Peer Review with LLM Agent". ☆70 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆141 · Updated 4 months ago
- ☆139 · Updated last month