QwenLM / ParScale
Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling
☆432 · Updated 3 months ago
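ParScale's core idea is to scale compute by running P parallel streams through one set of shared weights and aggregating their outputs, rather than adding parameters (parameter scaling) or sequential decoding steps (inference-time scaling). Below is a minimal PyTorch sketch of that idea; the class name, the additive per-stream transform, and the gating head are illustrative stand-ins, not the repo's actual API (the paper uses prefix-based input transforms):

```python
import torch
import torch.nn as nn

class ParallelScaledModel(nn.Module):
    """Sketch: P parallel forward passes through a shared backbone,
    each with its own learned input transform, merged by learned,
    input-dependent weights. Names and shapes are illustrative."""

    def __init__(self, backbone: nn.Module, d_model: int, num_streams: int = 4):
        super().__init__()
        self.backbone = backbone  # one set of weights, reused by every stream
        # stand-in for the paper's prefix-based transform: a learned shift per stream
        self.stream_shift = nn.Parameter(0.02 * torch.randn(num_streams, 1, d_model))
        self.gate = nn.Linear(d_model, num_streams)  # dynamic aggregation weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); run one backbone pass per stream
        outs = [self.backbone(x + shift) for shift in self.stream_shift]
        stacked = torch.stack(outs, dim=-2)             # (batch, seq, P, d_model)
        w = self.gate(x).softmax(dim=-1).unsqueeze(-1)  # (batch, seq, P, 1)
        return (stacked * w).sum(dim=-2)                # weighted merge of streams
```

Compute grows roughly linearly in the number of streams while the parameter count stays nearly flat, which is the axis the repository's scaling law characterizes.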
Alternatives and similar repositories for ParScale
Users interested in ParScale are comparing it to the repositories listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs (see the MLA sketch after this list) ☆189 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (see the routing sketch after this list) ☆220 · Updated 2 months ago
- Tina: Tiny Reasoning Models via LoRA ☆278 · Updated 2 weeks ago
- Scaling RL on advanced reasoning models ☆574 · Updated 2 weeks ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆343 · Updated last month
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆270 · Updated 6 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆209 · Updated last month
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 · Updated 4 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆487 · Updated 6 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆181 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the key-value memory sketch after this list). ☆346 · Updated 8 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆190 · Updated 5 months ago
- Official repository for “Reinforcement Learning for Reasoning in Large Language Models with One Training Example” ☆347 · Updated 2 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆833 · Updated 5 months ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆279 · Updated this week
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆222 · Updated this week
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (LLMs). ☆298 · Updated this week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆277 · Updated this week
- ReasonFlux Series - A family of LLM post-training algorithms focusing on data selection, reinforcement learning, and inference scaling ☆480 · Updated 3 weeks ago
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆241 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆248 · Updated 3 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆244 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆386 · Updated this week
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆219 · Updated 2 weeks ago
- A Comprehensive Survey on Long Context Language Modeling ☆180 · Updated last month
- Code for the paper: "Learning to Reason without External Rewards" ☆347 · Updated last month
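For the two MLA entries above ("Towards Economical Inference…" and TransMLA): multi-head latent attention compresses keys and values into one small per-token latent, caches only that latent, and re-expands it to per-head K and V at attention time, shrinking the KV cache. A minimal PyTorch sketch under those assumptions; the dimensions are illustrative, and DeepSeek's MLA additionally handles RoPE dimensions separately, which is omitted here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentKVAttention(nn.Module):
    """Sketch of multi-head latent attention (MLA): the KV cache stores
    one low-dimensional latent per token instead of full per-head keys
    and values. Illustrative, not DeepSeek's exact formulation."""

    def __init__(self, d_model: int = 1024, n_heads: int = 8, d_latent: int = 128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # this output is what gets cached
        self.k_up = nn.Linear(d_latent, d_model)     # re-expand latent to per-head K
        self.v_up = nn.Linear(d_latent, d_model)     # re-expand latent to per-head V
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, _ = x.shape
        latent = self.kv_down(x)  # (b, s, d_latent): much smaller than full K and V
        split = lambda t: t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj(x)), split(self.k_up(latent)), split(self.v_up(latent))
        o = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out(o.transpose(1, 2).reshape(b, s, -1))
```

Both repositories are about grafting this form onto existing Transformer checkpoints so pretrained models inherit the smaller cache.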
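For the Chain-of-Experts (CoE) entry: the communication it describes can be pictured as routing tokens through the expert pool over several sequential iterations, so later experts see residual updates written by earlier ones, instead of a single independent MoE pass. A hedged sketch; the per-iteration routers, expert shape, and hyperparameters here are hypothetical:

```python
import torch
import torch.nn as nn

class ChainOfExpertsLayer(nn.Module):
    """Sketch: tokens pass through a top-k MoE block several times in
    sequence, with a fresh router per iteration, so experts can react
    to each other's residual updates. Hypothetical configuration."""

    def __init__(self, d_model: int = 512, n_experts: int = 8,
                 n_iters: int = 2, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))
        self.routers = nn.ModuleList(nn.Linear(d_model, n_experts)
                                     for _ in range(n_iters))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); each iteration re-routes the updated tokens
        for router in self.routers:
            weights, idx = router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
            weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize top-k
            update = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        update[mask] += weights[mask, slot, None] * expert(x[mask])
            x = x + update  # residual write that the next iteration's router sees
        return x
```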
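For the memory-layers entry: a trainable key-value lookup means each token's query scores a table of learned keys, keeps only the top-k, and returns a weighted sum of the matching learned values, so the slot table adds parameters while per-token compute stays nearly constant. A simplified sketch; the actual work factorizes the key search with product keys to avoid scoring every slot, which is omitted here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Sketch of a trainable key-value memory: only k of num_slots
    values are read per token, so parameters scale with num_slots
    while FLOPs barely move. Simplified, no product-key factorization."""

    def __init__(self, d_model: int = 512, num_slots: int = 16384, k: int = 8):
        super().__init__()
        self.keys = nn.Parameter(0.02 * torch.randn(num_slots, d_model))
        self.values = nn.Embedding(num_slots, d_model)  # value table, read by index
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d_model); this naive variant scores every slot
        scores = x @ self.keys.t()                        # (..., num_slots)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)           # (..., k)
        picked = self.values(top_idx)                     # (..., k, d_model)
        return (weights.unsqueeze(-1) * picked).sum(dim=-2)
```

The FLOP claim in the description follows directly: doubling num_slots doubles the layer's parameters but leaves the top-k read path unchanged apart from the scoring step that product keys factorize away.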