Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling
☆476 · Updated May 17, 2025
Alternatives and similar repositories for ParScale
Users that are interested in ParScale are comparing it to the libraries listed below
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆131 · Updated Jun 24, 2025
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Updated Jan 27, 2026
- Official Repo for Open-Reasoner-Zero ☆2,086 · Updated Jun 2, 2025
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Updated Dec 29, 2025
- DeeperGEMM: crazy optimized version ☆75 · Updated May 5, 2025
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆287 · Updated Sep 25, 2025
- Simple RL training for reasoning ☆3,841 · Updated Dec 23, 2025
- Muon is Scalable for LLM Training ☆1,446 · Updated Aug 3, 2025
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,232 · Updated Aug 27, 2025
- ☆38 · Updated Aug 7, 2025
- Reproducing R1 for Code with Reliable Rewards ☆297 · Updated May 5, 2025
- Democratizing Reinforcement Learning for LLMs ☆5,259 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,386 · Updated Mar 11, 2026
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct, and reasoning models, developed by ByteDance Seed. ☆745 · Updated Jun 6, 2025
- ☆136 · Updated May 29, 2025
- ☆810 · Updated Jun 9, 2025
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆4,855 · Updated this week
- Technical report of Kimina-Prover Preview ☆363 · Updated Jul 10, 2025
- An Open Large Reasoning Model for Real-World Solutions ☆1,539 · Updated Feb 13, 2026
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆23 · Updated Mar 18, 2025
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024
- Yet another frontend for LLMs, written in .NET and WinUI 3 ☆10 · Updated Sep 14, 2025
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆186 · Updated Jul 23, 2025
- ☆133 · Updated Jun 6, 2025
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention ☆3,362 · Updated Jul 7, 2025
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆449 · Updated Oct 16, 2024
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆374 · Updated Dec 12, 2024
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆977 · Updated Feb 5, 2026
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,919 · Updated this week
- AnchorAttention: improved attention for long-context LLM training ☆216 · Updated Jan 15, 2025
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆344 · Updated Dec 16, 2025
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆123 · Updated May 6, 2025
- ☆1,113 · Updated Jan 10, 2026
- A series of technical reports on Slow Thinking with LLMs ☆761 · Updated Aug 13, 2025
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,699 · Updated this week
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP'24) ☆27 · Updated Oct 3, 2025
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆676 · Updated this week