Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling".
☆284 · Feb 19, 2025 · Updated last year
Alternatives and similar repositories for compute-optimal-tts
Users interested in compute-optimal-tts are comparing it to the repositories listed below.
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,839 · Jan 17, 2025 · Updated last year
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆94 · Nov 8, 2025 · Updated 4 months ago
- A comprehensive collection of process reward models. ☆141 · Oct 4, 2025 · Updated 5 months ago
- ☆28 · Oct 2, 2025 · Updated 5 months ago
- ☆52 · Mar 17, 2025 · Updated last year
- ☆22 · Oct 22, 2024 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆262 · May 14, 2025 · Updated 10 months ago
- ☆969 · Jan 23, 2025 · Updated last year
- Simple RL training for reasoning ☆3,841 · Dec 23, 2025 · Updated 3 months ago
- [NeurIPS 2025] TTRL: Test-Time Reinforcement Learning ☆1,025 · Mar 11, 2026 · Updated 2 weeks ago
- ☆27 · Nov 25, 2025 · Updated 4 months ago
- ☆1,402 · Sep 12, 2025 · Updated 6 months ago
- ☆33 · Oct 13, 2025 · Updated 5 months ago
- ☆52 · Feb 12, 2025 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Jul 16, 2025 · Updated 8 months ago
- [TMLR] Process Reward Models That Think ☆82 · Nov 29, 2025 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] LLM post-training suite, featuring ReasonFlux, ReasonFlux-PRM, and ReasonFlux-Coder. ☆524 · Sep 27, 2025 · Updated 6 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆264 · May 5, 2025 · Updated 10 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆224 · May 31, 2025 · Updated 9 months ago
- s1: Simple test-time scaling ☆6,646 · Jun 25, 2025 · Updated 9 months ago
- ☆150 · Mar 12, 2025 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆160 · Oct 23, 2025 · Updated 5 months ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,372 · Jul 12, 2025 · Updated 8 months ago
- [ACL 2024] Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue ☆26 · Oct 18, 2025 · Updated 5 months ago
- ☆28 · May 24, 2025 · Updated 10 months ago
- Scalable RL solution for advanced reasoning of language models ☆1,821 · Mar 18, 2025 · Updated last year
- Reproduce R1 Zero on Logic Puzzle ☆2,441 · Mar 20, 2025 · Updated last year
- Recipes to train self-rewarding reasoning LLMs. ☆231 · Mar 2, 2025 · Updated last year
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,065 · Jul 30, 2025 · Updated 7 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆170 · Mar 14, 2025 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,539 · Feb 13, 2026 · Updated last month