MoonshotAI / Moonlight
Muon is Scalable for LLM Training
☆1,022 · Updated 3 weeks ago
Alternatives and similar repositories for Moonlight:
Users interested in Moonlight are comparing it to the repositories listed below
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,746 · Updated 2 weeks ago
- Official Repo for Open-Reasoner-Zero ☆1,872 · Updated last week
- ☆662 · Updated this week
- Scalable RL solution for advanced reasoning of language models ☆1,488 · Updated last month
- Understanding R1-Zero-Like Training: A Critical Perspective ☆863 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆631 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆713 · Updated last month
- Large Reasoning Models ☆802 · Updated 4 months ago
- Dream 7B, a large diffusion language model ☆551 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLM inference, approximates attention with dynamic sparse calculation, which r… ☆971 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,055 · Updated last month
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆685 · Updated this week
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead (a minimal sketch of its orthogonalized update appears after this list) ☆575 · Updated 3 weeks ago
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,141 · Updated last week
- Pretraining code for a large-scale depth-recurrent language model ☆743 · Updated last week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆593 · Updated 3 weeks ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆1,492 · Updated 2 weeks ago
- LIMO: Less is More for Reasoning ☆913 · Updated 2 weeks ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆987 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the toy sketch after this list). Conceptually, spars… ☆317 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆715 · Updated 6 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,484 · Updated last month
- Unleashing the Power of Reinforcement Learning for Math and Code Reasoners ☆490 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆2,659 · Updated last week
- Explore the multimodal "Aha Moment" on a 2B model ☆572 · Updated last month
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆1,382 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,183 · Updated this week
- ☆630 · Updated 3 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,793 · Updated this week
- Ring attention implementation with flash attention ☆737 · Updated last week
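
For readers comparing the optimizer entries above: Moonlight and the Muon repo are built around an orthogonalized momentum update. Below is a minimal sketch of that idea, assuming the quintic Newton-Schulz coefficients published in the Muon write-up; the function names are illustrative, and production versions add Nesterov momentum, per-shape learning-rate scaling, and distributed sharding.

```python
import torch

@torch.no_grad()
def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Push the singular values of a 2D matrix toward 1 with a quintic
    Newton-Schulz iteration (coefficients from the Muon write-up)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.to(torch.bfloat16)
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T  # iterate on the wide orientation
    X = X / (X.norm() + 1e-7)  # spectral norm <= Frobenius norm <= 1
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X  # X <- aX + b(XX^T)X + c(XX^T)^2 X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(param, grad, buf, lr=0.02, momentum=0.95):
    """One illustrative Muon update for a 2D weight matrix:
    momentum accumulation, then orthogonalize the update."""
    buf.mul_(momentum).add_(grad)
    update = newton_schulz_orthogonalize(buf)
    param.add_(update.to(param.dtype), alpha=-lr)
```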
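
The memory-layers entry describes a trainable key-value lookup that adds parameters without adding FLOPs; a toy version of that idea looks roughly like the following. This is a dense-scoring sketch for illustration only: real memory layers factor the keys (product-key lookup) so top-k selection never scores every key, and `MemoryLayer`, `num_keys`, and `topk` are names chosen here, not the library's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Trainable key-value lookup: each query selects its top-k keys and
    returns a softmax-weighted sum of the matching values. Parameter count
    grows with num_keys, but each token only touches k value rows."""
    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 32):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * dim ** -0.5)
        self.values = nn.Parameter(torch.randn(num_keys, dim) * dim ** -0.5)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        scores = x @ self.keys.T                      # (batch, seq, num_keys)
        top_scores, idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)       # (batch, seq, k)
        selected = self.values[idx]                   # (batch, seq, k, dim)
        return (weights.unsqueeze(-1) * selected).sum(dim=-2)

# Usage: drop in where an FFN block would go, e.g.
# layer = MemoryLayer(dim=512); out = layer(torch.randn(2, 16, 512))
```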