liziniu / cold_start_rl
Code for Blog Post: Can Better Cold-Start Strategies Improve RL Training for LLMs?
☆19 · Updated 10 months ago
Alternatives and similar repositories for cold_start_rl
Users interested in cold_start_rl are comparing it to the libraries listed below.
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated 3 weeks ago
- ☆19 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆54 · Updated last year
- ☆64 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated 3 weeks ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 5 months ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 · Updated 6 months ago
- ☆80 · Updated 10 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- ☆85 · Updated 2 months ago
- ☆71 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Updated 6 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆179 · Updated 6 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆57 · Updated 11 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Updated 3 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆106 · Updated 8 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- The official repository of the Omni-MATH benchmark. ☆93 · Updated last year
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 9 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 10 months ago
- RENT (Reinforcement Learning via Entropy Minimization) is an unsupervised method for training reasoning LLMs. ☆41 · Updated 2 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆72 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated last year
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆59 · Updated last year