ZJU-REAL / HBPO
☆29 · Updated last month
Alternatives and similar repositories for HBPO
Users interested in HBPO are comparing it to the repositories listed below.
- ☆36 · Updated 2 weeks ago
- A Unified Framework for High-Performance and Extensible LLM Steering ☆42 · Updated this week
- ☆67 · Updated 3 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆90 · Updated 7 months ago
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆22 · Updated last month
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from the survey "A Survey on Large …" ☆95 · Updated 9 months ago
- [NeurIPS'25 Spotlight] ARM: Adaptive Reasoning Model ☆53 · Updated 2 months ago
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆262 · Updated 3 weeks ago
- Implementation of the paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆56 · Updated 10 months ago
- TreeRL: LLM Reinforcement Learning with On-Policy Tree Search (ACL'25) ☆68 · Updated 3 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆81 · Updated 4 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 5 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆292 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆128 · Updated 6 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆82 · Updated 6 months ago
- ☆333 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆130 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆256 · Updated 4 months ago
- Mind the Gap: Bridging Thought Leap for Improved CoT Tuning (https://arxiv.org/abs/2505.14684) ☆40 · Updated last week
- ☆37 · Updated last month
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 · Updated 4 months ago
- ☆121 · Updated last month
- ☆154 · Updated 4 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆104 · Updated 5 months ago
- R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning ☆62 · Updated 4 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆107 · Updated 2 months ago
- Segment Policy Optimization: Effective Segment-Level Credit Assignment in RL for Large Language Models ☆35 · Updated 2 weeks ago
- Official implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆127 · Updated 4 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆76 · Updated 2 weeks ago