QwenLM / ParScale
Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling
☆465 · Updated 7 months ago
Alternatives and similar repositories for ParScale
Users interested in ParScale are comparing it to the libraries listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 · Updated 3 weeks ago
- Scaling RL on advanced reasoning models ☆650 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (see the first sketch after this list) ☆227 · Updated last month
- ☆472 · Updated 2 weeks ago
- ☆208 · Updated 2 months ago
- ☆817 · Updated 6 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆215 · Updated 7 months ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆276 · Updated 2 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆278 · Updated 10 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆282 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆515 · Updated 10 months ago
- ☆123 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆249 · Updated 8 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆257 · Updated 7 months ago
- ☆329 · Updated 7 months ago
- Tina: Tiny Reasoning Models via LoRA ☆310 · Updated 3 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆216 · Updated last month
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆333 · Updated 2 weeks ago
- ☆84 · Updated 8 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the second sketch after this list) ☆365 · Updated last year
- A highly capable, lightweight 2.4B LLM trained on only 1T tokens of pre-training data, with all details released. ☆222 · Updated 5 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆280 · Updated 3 months ago
- ☆440 · Updated 4 months ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆389 · Updated last month
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆422 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] ReasonFlux (long-CoT), ReasonFlux-PRM (process reward model) and ReasonFlux-Coder (code generation) ☆512 · Updated 3 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆330 · Updated 8 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 9 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆409 · Updated last year
- ☆177 · Updated 8 months ago
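
The Chain of Experts entry above is one of the few items here that names a concrete mechanism: experts inside an MoE layer communicating rather than acting independently. Below is a minimal sketch of that idea, assuming a simple residual chain with a separate router per step; the class and all names are hypothetical illustrations, not the CoE repository's actual API, and top-1 routing is used purely for brevity.

```python
# Hedged sketch of a chain-of-experts idea (hypothetical names, not CoE's API).
# A standard MoE layer routes each token once and sums expert outputs in
# parallel; this chained variant re-routes the intermediate result, so a later
# routing step can react to what an earlier expert wrote into the residual.
import torch
import torch.nn as nn


class ChainedMoE(nn.Module):
    """Hypothetical chain-of-experts layer: route, apply, re-route."""

    def __init__(self, d_model: int, n_experts: int, chain_len: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        # One router per chain step, so routing can adapt between steps.
        self.routers = nn.ModuleList([
            nn.Linear(d_model, n_experts) for _ in range(chain_len)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model)
        h = x
        for router in self.routers:
            weights = torch.softmax(router(h), dim=-1)   # (tokens, n_experts)
            top_w, top_i = weights.max(dim=-1)           # top-1 routing for brevity
            out = torch.stack([self.experts[i](h[t])     # each token visits one expert
                               for t, i in enumerate(top_i)])
            h = h + top_w.unsqueeze(-1) * out            # residual chaining step
        return h


# Usage: a batch of 8 token vectors through a 4-expert, 2-step chain.
layer = ChainedMoE(d_model=64, n_experts=4, chain_len=2)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```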
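
Similarly, the memory-layers entry describes a trainable key-value lookup that adds parameters without a proportional FLOPs increase. The sketch below illustrates that trade under simplifying assumptions: the value table holds the bulk of the extra parameters and only the top-k rows are gathered per token. The dense key-scoring step shown here still scales with table size; real memory layers avoid that with tricks such as product keys. All names are hypothetical, not the repository's API.

```python
# Hedged sketch of a sparse key-value memory layer (hypothetical names).
# Growing n_slots adds parameters, but only k value rows are read per token.
import torch
import torch.nn as nn


class SparseMemoryLayer(nn.Module):
    """Hypothetical sparse memory: large parameter table, top-k readout."""

    def __init__(self, d_model: int, n_slots: int = 4096, k: int = 8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.values = nn.Embedding(n_slots, d_model)  # bulk of the extra parameters
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model)
        scores = x @ self.keys.t()                         # dense scoring (simplified)
        top_s, top_i = scores.topk(self.k, dim=-1)         # keep k of n_slots entries
        gate = torch.softmax(top_s, dim=-1).unsqueeze(-1)  # (batch, k, 1)
        vals = self.values(top_i)                          # gather only k value rows
        return x + (gate * vals).sum(dim=1)                # sparse weighted readout


# Usage: the layer adds a 4096-slot table, yet each token touches only 8 slots.
mem = SparseMemoryLayer(d_model=64)
print(mem(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```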