RyanLiu112 / compute-optimal-tts
Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling".
☆268 · Updated 4 months ago
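As background for the headline repo: test-time scaling (TTS) spends extra inference compute to improve a small model's answers, for example by sampling many candidates and keeping the highest-scoring one (best-of-N, one of the strategies studied in compute-optimal TTS work). The sketch below is purely illustrative and not the repo's actual API; `generate` and `score` are hypothetical stand-ins for a policy LLM and a reward model.

```python
# Minimal best-of-N test-time scaling sketch. All names here are
# illustrative stand-ins, not the compute-optimal-tts codebase's API.
import random


def generate(prompt: str, seed: int) -> str:
    # Stand-in for sampling one candidate answer from a small policy LLM.
    rng = random.Random(seed)
    return f"{prompt} -> candidate {rng.randint(0, 9)}"


def score(candidate: str) -> float:
    # Stand-in for a reward model (e.g. a PRM/ORM) scoring a candidate.
    return float(candidate.split()[-1])


def best_of_n(prompt: str, n: int) -> str:
    # Spending more test-time compute (larger n) lets a weaker model
    # trade inference FLOPs for answer quality.
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)


print(best_of_n("2+2", n=8))
```

Increasing `n` raises the expected score of the returned candidate; the compute-optimal question is how to allocate a fixed inference budget (candidate count, verifier calls, search strategy) for the best final accuracy.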
Alternatives and similar repositories for compute-optimal-tts
Users interested in compute-optimal-tts are comparing it to the repositories listed below.
- ReasonFlux Series - A family of LLM post-training algorithms focusing on data selection, reinforcement learning, and inference scaling ☆447 · Updated last week
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆244 · Updated 2 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆226 · Updated 2 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆188 · Updated 3 months ago
- An O1 replication for coding ☆335 · Updated 7 months ago
- ☆303 · Updated last month
- ☆142 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆216 · Updated 2 weeks ago
- Official repository for "Reinforcement Learning for Reasoning in Large Language Models with One Training Example" ☆323 · Updated this week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details ☆195 · Updated last week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆412 · Updated last month
- Code for the paper "Learning to Reason without External Rewards" ☆319 · Updated this week
- A MemAgent framework that can be extrapolated to 3.5M, along with a training framework for RL training of any agent workflow ☆157 · Updated last week
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆249 · Updated last month
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆133 · Updated 3 months ago
- Resources for our paper "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆150 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆178 · Updated 3 weeks ago
- Tina: Tiny Reasoning Models via LoRA ☆266 · Updated last month
- ☆238 · Updated last month
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆145 · Updated 6 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆223 · Updated 2 months ago
- Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆197 · Updated 2 weeks ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆143 · Updated last month
- ☆205 · Updated 4 months ago
- ☆585 · Updated 3 months ago
- ☆318 · Updated last month
- ☆266 · Updated last month
- A series of technical reports on Slow Thinking with LLMs ☆708 · Updated last month
- 🚀ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum—cold-start pre-training, multimodal rei… ☆159 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆244 · Updated 3 months ago