Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling
☆472 · Updated May 17, 2025 (9 months ago)
Alternatives and similar repositories for ParScale
Users interested in ParScale are comparing it to the repositories listed below.
- The evaluation framework for training-free sparse attention in LLMs ☆121 · Updated Jan 27, 2026
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆282 · Updated Sep 25, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Updated Jun 24, 2025
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆183 · Updated Jul 23, 2025
- Simple RL training for reasoning ☆3,830 · Updated Dec 23, 2025
- Official Repo for Open-Reasoner-Zero ☆2,087 · Updated Jun 2, 2025
- DeeperGEMM: crazy optimized version ☆74 · Updated May 5, 2025
- ☆129 · Updated Jun 6, 2025
- ☆134 · Updated May 29, 2025
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆3,586 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,219 · Updated Aug 27, 2025
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Updated Dec 29, 2025
- AnchorAttention: Improved attention for long-context LLM training ☆214 · Updated Jan 15, 2025
- Muon is Scalable for LLM Training ☆1,440 · Updated Aug 3, 2025
- Democratizing Reinforcement Learning for LLMs ☆5,167 · Updated this week
- Technical report of Kimina-Prover Preview. ☆361 · Updated Jul 10, 2025
- Seed-Coder is a family of lightweight open-source code LLMs comprising base, instruct and reasoning models, developed by ByteDance Seed. ☆744 · Updated Jun 6, 2025
- ☆813 · Updated Jun 9, 2025
- ☆38 · Updated Aug 7, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,361 · Updated Feb 13, 2026
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆23 · Updated Mar 18, 2025
- An Open Large Reasoning Model for Real-World Solutions ☆1,532 · Updated Feb 13, 2026
- Reproducing R1 for Code with Reliable Rewards ☆290 · Updated May 5, 2025
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,739 · Updated May 11, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆969 · Updated Feb 5, 2026
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ☆3,347 · Updated Jul 7, 2025
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆372 · Updated Dec 12, 2024
- Efficient Triton Kernels for LLM Training ☆6,162 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆459 · Updated Apr 18, 2024
- Scaling RL on advanced reasoning models ☆665 · Updated Oct 20, 2025
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,628 · Updated this week
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Updated Jul 6, 2025
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] ☆221 · Updated Nov 27, 2025
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated Dec 25, 2025
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆344 · Updated Dec 16, 2025
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆446 · Updated Oct 16, 2024
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆650 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,339 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025