Optimization-AI / DisCO
Discriminative Constrained Optimization for Reinforcing Large Reasoning Models
☆47 · Updated last month
Alternatives and similar repositories for DisCO
Users interested in DisCO are comparing it to the libraries listed below.
- A Sober Look at Language Model Reasoning ☆89 · Updated last month
- Repo of paper "Free Process Rewards without Process Labels" ☆168 · Updated 9 months ago
- TreeRL: LLM Reinforcement Learning with On-Policy Tree Search in ACL'25 ☆84 · Updated 6 months ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆72 · Updated 7 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆50 · Updated 5 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆123 · Updated 8 months ago
- A repo for open research on building large reasoning models ☆121 · Updated last week
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … ☆97 · Updated 11 months ago
- ☆346 · Updated 4 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆138 · Updated last month
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆72 · Updated 5 months ago
- The official repository of NeurIPS'25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆20 · Updated last month
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆115 · Updated 4 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 5 months ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆50 · Updated last year
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 · Updated 8 months ago
- Official implementation of paper "Process Reward Model with Q-value Rankings" ☆65 · Updated 10 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆55 · Updated 2 months ago
- Official Repository of LatentSeek ☆70 · Updated 6 months ago
- ☆45 · Updated 2 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆61 · Updated 6 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆84 · Updated 8 months ago
- Reinforcing General Reasoning without Verifiers ☆92 · Updated 5 months ago
- ☆189 · Updated 7 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆49 · Updated 6 months ago
- [AI4MATH@ICML2025] Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs ☆41 · Updated 7 months ago
- [ACL'25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆85 · Updated 10 months ago
- RL with Experience Replay ☆51 · Updated 4 months ago
- ☆135 · Updated 9 months ago