RyanLiu112 / compute-optimal-tts
Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling".
☆271 · Updated 7 months ago
Alternatives and similar repositories for compute-optimal-tts
Users interested in compute-optimal-tts are comparing it to the libraries listed below.
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆261 · Updated 4 months ago
- ReasonFlux Series - ReasonFlux, ReasonFlux-PRM, and ReasonFlux-Coder ☆485 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆253 · Updated 4 months ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- ☆315 · Updated 3 months ago
- Tina: Tiny Reasoning Models via LoRA ☆282 · Updated last month
- Official repository for the paper "Reinforcement Learning for Reasoning in Large Language Models with One Training Example" ☆357 · Updated last week
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 5 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details ☆212 · Updated last month
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆301 · Updated last week
- TTRL: Test-Time Reinforcement Learning ☆806 · Updated last month
- Code for the paper "Learning to Reason without External Rewards" ☆353 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆188 · Updated 2 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆244 · Updated 4 months ago
- Pre-trained, scalable, high-performance reward models via policy-discriminative learning ☆152 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to latent reasoning ☆206 · Updated last week
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆341 · Updated this week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆220 · Updated this week
- ☆287 · Updated 3 months ago
- ☆331 · Updated last month
- Resources for our paper "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆160 · Updated 3 months ago
- An O1 Replication for Coding ☆334 · Updated 9 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆141 · Updated 5 months ago
- ☆205 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated 2 months ago
- Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL ☆406 · Updated last week
- MiroThinker is a family of open-source agentic models trained for deep research and complex tool-use scenarios ☆314 · Updated this week
- ☆209 · Updated 6 months ago
- ICML 2025: Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning ☆48 · Updated 4 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆149 · Updated 8 months ago