QwenLM / ParScale
Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling
☆451 · Updated 6 months ago
Alternatives and similar repositories for ParScale
Users interested in ParScale are comparing it to the repositories listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆194 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆223 · Updated 2 weeks ago
- Scaling RL on advanced reasoning models ☆632 · Updated last month
- ☆817 · Updated 5 months ago
- Tina: Tiny Reasoning Models via LoRA ☆305 · Updated last month
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆274 · Updated 9 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆269 · Updated 2 weeks ago
- ☆205 · Updated 3 weeks ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆206 · Updated 5 months ago
- ☆301 · Updated 5 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆222 · Updated 3 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆257 · Updated 6 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆269 · Updated last month
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆676 · Updated 3 weeks ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆410 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆356 · Updated 11 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆248 · Updated 7 months ago
- ☆326 · Updated 5 months ago
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem… ☆388 · Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation) ☆501 · Updated last month
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆271 · Updated 3 weeks ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆241 · Updated 3 months ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆376 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆501 · Updated 9 months ago
- ☆85 · Updated 7 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆203 · Updated 4 months ago
- ☆172 · Updated 6 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 8 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆441 · Updated last year
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆186 · Updated 4 months ago