MoonshotAI / Moonlight
Muon is Scalable for LLM Training
⭐ 1,093 · Updated 3 months ago
Alternatives and similar repositories for Moonlight
Users interested in Moonlight are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ⭐ 720 · Updated 3 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ⭐ 1,817 · Updated 3 months ago
- ⭐ 796 · Updated last month
- Dream 7B, a large diffusion language model ⭐ 816 · Updated 3 weeks ago
- OLMoE: Open Mixture-of-Experts Language Models ⭐ 798 · Updated 3 months ago
- Muon is an optimizer for hidden layers in neural networks ⭐ 988 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ⭐ 1,023 · Updated last week
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference Time Scaling ⭐ 410 · Updated last month
- Scalable RL solution for advanced reasoning of language models ⭐ 1,650 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ⭐ 1,067 · Updated 2 weeks ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ⭐ 956 · Updated 3 weeks ago
- Official Repo for Open-Reasoner-Zero ⭐ 1,985 · Updated last month
- Large Reasoning Models ⭐ 805 · Updated 7 months ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ⭐ 1,421 · Updated 2 months ago
- Ring attention implementation with flash attention ⭐ 800 · Updated last week
- Unleashing the Power of Reinforcement Learning for Math and Code Reasoners ⭐ 645 · Updated last month
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ⭐ 667 · Updated last month
- Pretraining code for a large-scale depth-recurrent language model ⭐ 793 · Updated last month
- Fast, Flexible and Portable Structured Generation ⭐ 1,052 · Updated last week
- [COLM 2025] LIMO: Less is More for Reasoning ⭐ 977 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ⭐ 735 · Updated 9 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ⭐ 1,384 · Updated this week
- Recipes to scale inference-time compute of open models ⭐ 1,101 · Updated last month
- An Open Large Reasoning Model for Real-World Solutions ⭐ 1,504 · Updated last month
- ⭐ 485 · Updated this week
- Minimalistic large language model 3D-parallelism training ⭐ 2,012 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ⭐ 1,747 · Updated last year
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ⭐ 1,176 · Updated last month
- ⭐ 728 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ⭐ 341 · Updated 7 months ago