MoonshotAI / Moonlight
Muon is Scalable for LLM Training
★1,336 · Updated 2 months ago
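For context on the headline: the Muon optimizer that Moonlight scales up replaces the usual element-wise momentum update with an approximately orthogonalized one. Below is a minimal sketch, assuming the quintic Newton-Schulz coefficients published in the open-source Muon reference code; the function names, hyperparameters, and toy usage are illustrative, not Moonlight's actual API.

```python
import numpy as np

def newton_schulz_orthogonalize(g: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximate U @ V^T from the SVD of g without computing an SVD."""
    a, b, c = 3.4445, -4.7750, 2.0315       # quintic coefficients from the public Muon reference code
    x = g / (np.linalg.norm(g) + 1e-7)      # Frobenius-normalize so the iteration converges
    transposed = x.shape[0] > x.shape[1]
    if transposed:                           # iterate on the wide orientation
        x = x.T
    for _ in range(steps):
        A = x @ x.T
        x = a * x + (b * A + c * A @ A) @ x  # x <- a*x + b*(xx^T)x + c*(xx^T)^2 x
    return x.T if transposed else x

def muon_step(w, g, m, lr=0.02, beta=0.95):
    """One Muon update for a single 2-D weight matrix (names are illustrative)."""
    m = beta * m + g                         # momentum buffer
    o = newton_schulz_orthogonalize(m)       # near-orthogonal update direction
    return w - lr * o, m

# Toy usage: a single step on a random 256x128 weight matrix.
rng = np.random.default_rng(0)
w = 0.02 * rng.normal(size=(256, 128))
g = rng.normal(size=(256, 128))
w, m = muon_step(w, g, np.zeros_like(w))
```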
Alternatives and similar repositories for Moonlight
Users interested in Moonlight are comparing it to the libraries listed below.
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ★903 · Updated 7 months ago
- Dream 7B, a large diffusion language model ★1,018 · Updated 3 weeks ago
- OLMoE: Open Mixture-of-Experts Language Models ★886 · Updated last month
- MoBA: Mixture of Block Attention for Long-Context LLMs ★1,941 · Updated 6 months ago
- ★817 · Updated 4 months ago
- slime is an LLM post-training framework for RL scaling ★2,170 · Updated this week
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ★1,071 · Updated 3 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ★1,126 · Updated last month
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ★1,231 · Updated this week
- Official Repo for Open-Reasoner-Zero ★2,054 · Updated 4 months ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ★447 · Updated 5 months ago
- Muon is an optimizer for hidden layers in neural networks ★1,888 · Updated 3 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ★772 · Updated 2 months ago
- Scalable toolkit for efficient model reinforcement ★931 · Updated last week
- Ring attention implementation with flash attention ★901 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference by computing attention with approximate and dynamic sparsity… ★1,141 · Updated 3 weeks ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ★1,597 · Updated 5 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ★836 · Updated last week
- Scalable RL solution for advanced reasoning of language models ★1,751 · Updated 7 months ago
- Minimalistic large language model 3D-parallelism training ★2,267 · Updated last month
- Training Large Language Model to Reason in a Continuous Latent Space ★1,297 · Updated 2 months ago
- ★843 · Updated last week
- [NeurIPS 2025] MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ★1,434 · Updated last week
- Large Reasoning Models ★805 · Updated 10 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ★856 · Updated 3 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" ★3,079 · Updated last week
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ★389 · Updated last month
- [COLM 2025] LIMO: Less is More for Reasoning ★1,037 · Updated 2 months ago
- ★1,309 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for educational purposes ★1,856 · Updated last month