sail-sg / SkyLadder
The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling
☆35 · Updated last week
Alternatives and similar repositories for SkyLadder
Users interested in SkyLadder are comparing it to the repositories listed below.
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- ☆86 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆55 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆19 · Updated 9 months ago
- Official implementation of DenseMixer: Improving MoE Post-Training with Precise Router Gradient ☆58 · Updated 2 months ago
- Exploration of automated dataset selection approaches at large scales ☆47 · Updated 7 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- ☆33 · Updated 9 months ago
- ☆85 · Updated 9 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients (Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, …) ☆51 · Updated 6 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 7 months ago
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆30 · Updated 3 months ago
- ☆62 · Updated 3 months ago
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆18 · Updated 7 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated 2 months ago
- Code for Adaptive Data Optimization ☆26 · Updated 10 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆102 · Updated 2 weeks ago
- ☆20 · Updated 2 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆88 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆91 · Updated 4 months ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆55 · Updated 11 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 6 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆47 · Updated 3 weeks ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- ☆98 · Updated last month
- ☆107 · Updated last year
- ☆50 · Updated 8 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆35 · Updated last year