RyanLiu112 / compute-optimal-tts
Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling".
☆277 · Updated 10 months ago
Alternatives and similar repositories for compute-optimal-tts
Users interested in compute-optimal-tts are comparing it to the repositories listed below.
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation) · ☆510 · Updated 2 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" · ☆270 · Updated 2 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning · ☆258 · Updated 7 months ago
- ☆328 · Updated 6 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning · ☆189 · Updated 9 months ago
- Resources for our paper: "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" · ☆165 · Updated 2 months ago
- A highly capable 2.4B lightweight LLM trained on only 1T tokens of pre-training data, with all details released · ☆222 · Updated 5 months ago
- ☆321 · Updated 7 months ago
- Repo for the paper https://arxiv.org/abs/2504.13837 · ☆301 · Updated last week
- [ICML 2025] Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning · ☆51 · Updated 7 months ago
- Towards a Unified View of Large Language Model Post-Training · ☆195 · Updated 3 months ago
- Tina: Tiny Reasoning Models via LoRA · ☆310 · Updated 3 months ago
- Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL · ☆508 · Updated 3 months ago
- An O1 Replication for Coding · ☆336 · Updated last year
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example · ☆385 · Updated last month
- Official Repository of "Learning to Reason under Off-Policy Guidance" · ☆392 · Updated 2 months ago
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling · ☆463 · Updated 7 months ago
- Code for the paper: "Learning to Reason without External Rewards" · ☆383 · Updated 5 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification · ☆513 · Updated last month
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" · ☆254 · Updated 7 months ago
- ☆346 · Updated 4 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆226 · Updated last month
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning · ☆297 · Updated 2 months ago
- ☆249 · Updated 4 months ago
- 📖 A repository for organizing papers, code, and other resources related to Latent Reasoning · ☆317 · Updated last month
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme · ☆146 · Updated 8 months ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning · ☆935 · Updated 3 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning · ☆163 · Updated 3 months ago
- A series of technical reports on Slow Thinking with LLMs · ☆753 · Updated 4 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) · ☆169 · Updated last month