QwenLM / ParScale
Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling
☆460 · Updated 6 months ago
Alternatives and similar repositories for ParScale
Users interested in ParScale are comparing it to the repositories listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆197 · Updated last week
- Scaling RL on advanced reasoning models ☆641 · Updated last month
- ☆208 · Updated last month
- [NeurIPS 2025] Simple extension on top of vLLM to help you speed up reasoning models without training. ☆212 · Updated 6 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (a minimal sketch follows this list) ☆224 · Updated last month
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆415 · Updated 2 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆279 · Updated last month
- A Comprehensive Survey on Long Context Language Modeling ☆213 · Updated 2 weeks ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆275 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆510 · Updated 10 months ago
- ☆818 · Updated 6 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆248 · Updated 7 months ago
- ☆85 · Updated 8 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆277 · Updated 9 months ago
- Tina: Tiny Reasoning Models via LoRA ☆309 · Updated 2 months ago
- ☆328 · Updated 6 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a minimal sketch follows this list) ☆359 · Updated last year
- A highly capable 2.4B lightweight LLM trained on only 1T tokens of pre-training data, with all details released. ☆221 · Updated 4 months ago
- ☆300 · Updated 6 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆245 · Updated 4 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆725 · Updated 2 weeks ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆257 · Updated 6 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆312 · Updated last week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆443 · Updated last year
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆327 · Updated 7 months ago
- ☆344 · Updated last week
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆370 · Updated last month
- ☆439 · Updated 4 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 8 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆209 · Updated 2 weeks ago
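For the Chain of Experts (CoE) entry above, the "communication between experts" idea can be illustrated with a minimal PyTorch sketch: instead of one parallel MoE pass, the hidden state is routed through the experts for several sequential iterations, so later experts can react to what earlier experts produced. The class name, hyperparameters, and the dense per-expert loop below are illustrative assumptions for clarity, not the repo's actual API or an efficient implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChainOfExpertsLayer(nn.Module):
    """Sketch of sequential expert communication (names/sizes assumed)."""

    def __init__(self, d_model: int, num_experts: int = 4,
                 top_k: int = 2, num_iters: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k
        self.num_iters = num_iters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for _ in range(self.num_iters):
            logits = self.router(h)                     # (B, S, E): re-route each pass
            w, idx = logits.topk(self.top_k, dim=-1)    # (B, S, k)
            w = F.softmax(w, dim=-1)
            out = torch.zeros_like(h)
            for e, expert in enumerate(self.experts):
                expert_out = expert(h)                  # dense compute, for clarity only
                for slot in range(self.top_k):
                    mask = (idx[..., slot] == e).unsqueeze(-1).float()
                    out = out + mask * w[..., slot:slot + 1] * expert_out
            h = h + out                                 # residual: next pass sees this output
        return h

layer = ChainOfExpertsLayer(d_model=64)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

A real implementation would dispatch only the selected tokens to each expert; the dense loop here is just to keep the sequential-routing idea visible.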
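Likewise, the memory-layers entry describes a trainable key-value lookup. A minimal sketch of that mechanism, again in plain PyTorch with illustrative names: each token's query scores a learned key table and mixes back only the top-k values, so parameter count grows with the table size while the value lookup touches just k rows per token. Note the naive full scoring below is O(num_keys) FLOPs; the actual line of work avoids this (e.g., with product-key decompositions), which this sketch omits for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Sketch of a trainable key-value memory layer (names/sizes assumed)."""

    def __init__(self, d_model: int, num_keys: int = 4096, k: int = 8):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model, bias=False)
        self.keys = nn.Parameter(torch.randn(num_keys, d_model) * d_model ** -0.5)
        self.values = nn.Embedding(num_keys, d_model)  # large, sparsely accessed table
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                         # (B, S, D)
        scores = q @ self.keys.T                       # (B, S, num_keys): naive full scoring
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)        # (B, S, k)
        vals = self.values(top_idx)                    # (B, S, k, D): only k rows per token
        return x + (weights.unsqueeze(-1) * vals).sum(dim=-2)

layer = MemoryLayer(d_model=512)
print(layer(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```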