KellerJordan / Muon
Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead
☆210 · Updated last week
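To ground the headline claim, here is a minimal PyTorch sketch of a Muon-style update for a single 2D weight matrix: accumulate momentum as usual, then approximately orthogonalize the update direction with a Newton-Schulz iteration before applying it. The function names and hyperparameters are illustrative, and the quintic coefficients are assumed from the Muon writeup; the real optimizer adds details omitted here (Nesterov momentum, bfloat16 iteration, shape-dependent scaling, and a fallback optimizer for non-2D parameters).

```python
import torch

def newton_schulz_orthogonalize(g: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately map g to the nearest semi-orthogonal matrix
    (the U V^T factor of its SVD) via an odd-polynomial iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic coefficients (assumed from the Muon writeup)
    x = g / (g.norm() + 1e-7)           # normalize so singular values start in [0, 1]
    transposed = x.size(0) > x.size(1)  # iterate on the wide orientation
    if transposed:
        x = x.T
    for _ in range(steps):
        A = x @ x.T
        x = a * x + (b * A + c * A @ A) @ x  # pushes all singular values toward 1
    return x.T if transposed else x

@torch.no_grad()
def muon_step(weight, grad, momentum_buf, lr=0.02, momentum=0.95):
    """One illustrative Muon-style step for a single 2D parameter (a sketch,
    not the reference implementation)."""
    momentum_buf.mul_(momentum).add_(grad)               # standard heavy-ball momentum
    update = newton_schulz_orthogonalize(momentum_buf)   # orthogonalized direction
    weight.add_(update, alpha=-lr)
```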
Alternatives and similar repositories for Muon:
Users interested in Muon are comparing it to the libraries listed below.
- Normalized Transformer (nGPT) ☆145 · Updated last month
- ☆180 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆75 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆219 · Updated last month
- ☆146 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA (see the hypersphere sketch after this list) ☆270 · Updated 2 months ago
- Efficient optimizers ☆144 · Updated this week
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆115 · Updated 4 months ago
- DeMo: Decoupled Momentum Optimization ☆170 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆175 · Updated this week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆376 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a naive sketch of the lookup follows this list) ☆277 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆102 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆182 · Updated 7 months ago
- 🧱 Modula software package ☆132 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆203 · Updated 3 weeks ago
- ☆296 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆90 · Updated last month
- Some preliminary explorations of Mamba's context scaling. ☆206 · Updated 11 months ago
- ☆53 · Updated 11 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆492 · Updated 2 months ago
- Accelerated First Order Parallel Associative Scan ☆169 · Updated 4 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆157 · Updated this week
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago
- ☆240 · Updated 4 months ago
- ☆201 · Updated 6 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆178 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆110 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆166 · Updated last month
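For the nGPT entries above, here is what "learning entirely on the hypersphere" looks like as a minimal PyTorch sketch: both weight rows and activations are projected to unit norm, so every matmul computes cosine similarities. This is an illustrative sketch, not NVIDIA's implementation; nGPT also renormalizes hidden states after each residual update using learned interpolation weights, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereLinear(nn.Module):
    """Linear layer constrained to the unit hypersphere (nGPT-flavored sketch):
    weight rows and inputs are L2-normalized, so each output entry is a
    cosine similarity in [-1, 1]."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) / dim_in ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.normalize(self.weight, dim=-1)  # project each weight row onto the sphere
        x = F.normalize(x, dim=-1)            # project each activation vector onto the sphere
        return x @ w.T
```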
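And for the memory-layer entry, a deliberately naive sketch of the trainable key-value lookup idea: a large learned key/value table queried with top-k gating, so most of the added parameters sit idle for any given token. Note this toy version still scores every key; real memory layers use a product-key decomposition so the lookup stays cheap, which is what makes the "without increasing FLOPs" claim work. All names and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class NaiveMemoryLayer(nn.Module):
    """Toy key-value memory: per query, softmax-gate the top-k of num_keys
    learned value vectors. Parameter count scales with num_keys while only
    k value vectors contribute to each output."""
    def __init__(self, dim: int, num_keys: int = 16384, k: int = 8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) / dim ** 0.5)
        self.values = nn.Parameter(torch.randn(num_keys, dim) / dim ** 0.5)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = x @ self.keys.T                     # (batch, num_keys); naive full scoring
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        gate = top_scores.softmax(dim=-1)            # (batch, k) mixture weights
        picked = self.values[top_idx]                # (batch, k, dim) gathered values
        return (gate.unsqueeze(-1) * picked).sum(dim=1)
```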