MoonshotAI / Moonlight
Muon is Scalable for LLM Training
★1,223 · Updated 4 months ago
Alternatives and similar repositories for Moonlight
Users interested in Moonlight are comparing it to the libraries listed below.
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★731 · Updated 4 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★1,846 · Updated 3 months ago
- ★800 · Updated last month
- Dream 7B, a large diffusion language model · ★857 · Updated last month
- Muon is an optimizer for hidden layers in neural networks · ★1,390 · Updated 3 weeks ago
- slime is an LLM post-training framework aimed at RL scaling. · ★975 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models · ★823 · Updated 4 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective · ★1,048 · Updated last week
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling · ★417 · Updated 2 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper · ★673 · Updated last month
- Official Repo for Open-Reasoner-Zero · ★2,008 · Updated 2 months ago
- An open-source RL system from ByteDance Seed and Tsinghua AIR · ★1,470 · Updated 2 months ago
- Scalable RL solution for advanced reasoning of language models · ★1,668 · Updated 4 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities · ★1,014 · Updated 2 weeks ago
- Ring attention implementation with flash attention · ★828 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ★1,080 · Updated this week
- An Open Large Reasoning Model for Real-World Solutions · ★1,510 · Updated 2 months ago
- Large Reasoning Models · ★804 · Updated 7 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models · ★747 · Updated 3 weeks ago
- Pretraining and inference code for a large-scale depth-recurrent language model · ★806 · Updated 2 weeks ago
- Unleashing the Power of Reinforcement Learning for Math and Code Reasoners · ★689 · Updated last month
- Training Large Language Models to Reason in a Continuous Latent Space · ★1,216 · Updated 6 months ago
- Scalable toolkit for efficient model reinforcement · ★558 · Updated this week
- Recipes to scale inference-time compute of open models · ★1,110 · Updated 2 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need · ★331 · Updated 2 weeks ago
- MMaDA: Open-Sourced Multimodal Large Diffusion Language Models · ★1,254 · Updated last month
- Minimalistic large language model 3D-parallelism training · ★2,068 · Updated 3 weeks ago
- [COLM 2025] LIMO: Less is More for Reasoning · ★986 · Updated 3 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. · ★739 · Updated 10 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ★342 · Updated 7 months ago
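Several entries above center on the Muon optimizer, which Moonlight scales up for LLM training. Muon replaces a 2-D weight's raw momentum update with an (approximately) orthogonalized version of it, computed via a quintic Newton-Schulz iteration. Below is a minimal NumPy sketch of that idea; the coefficients and loop structure are assumptions based on the publicly described algorithm, not taken from this page, and real implementations operate per-parameter inside a PyTorch optimizer with learning-rate scaling.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Quintic Newton-Schulz iteration that pushes the singular values of G
    # toward 1, approximating the nearest semi-orthogonal matrix.
    # Coefficients follow the publicly released Muon reference code (assumed).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + eps)  # normalize so singular values <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                     # iterate on the short-fat orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(W, grad, buf, lr=0.02, momentum=0.95):
    # One Muon update for a single 2-D weight matrix: accumulate
    # SGD-style momentum, then orthogonalize it as the update direction.
    buf[:] = momentum * buf + grad
    W -= lr * newton_schulz_orthogonalize(buf)
    return W
```

The orthogonalization is what distinguishes Muon from plain momentum SGD: every singular direction of the update gets roughly equal magnitude, which is why it applies only to 2-D hidden-layer weights (embeddings and scalars are handled by a conventional optimizer such as AdamW).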