microsoft / dion
Dion optimizer algorithm
☆343 · Updated 2 weeks ago
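For context on the headline entry: Dion is a distributed optimizer built around low-rank orthonormalized momentum updates. Below is a hypothetical, single-device sketch of one such step — the function name, the handling of the carried rank-r factor `Q`, the error-feedback coefficient, and the `(m/n)**0.5` scaling are illustrative assumptions, not the repository's actual implementation.

```python
import torch

def dion_style_step(param, momentum, Q, lr=0.01, mu=0.95):
    """One Dion-style low-rank orthonormalized update (simplified sketch).

    param:    m x n weight matrix with a populated .grad
    momentum: m x n momentum buffer, updated in place
    Q:        n x r right factor carried across steps (power iteration state)
    """
    B = momentum.add_(param.grad)        # buffer = momentum + gradient
    P = B @ Q                            # amortized power iteration: left factor
    P, _ = torch.linalg.qr(P)            # orthonormalize its columns
    R = B.T @ P                          # matching right factor
    momentum.sub_((1 - mu) * (P @ R.T))  # error feedback: keep what rank-r missed
    Q.copy_(torch.linalg.qr(R)[0])       # refresh the orthonormal right basis
    m, n = param.shape
    # apply the scaled rank-r orthonormal update
    param.data.sub_(lr * (m / n) ** 0.5 * (P @ Q.T))
```

The low-rank factors are what make an update like this shardable: each worker only needs its slice of `P` and `R`, which is the property several of the FSDP-oriented repos below exploit.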
Alternatives and similar repositories for dion
Users interested in dion are comparing it to the libraries listed below.
- Simple & Scalable Pretraining for Neural Architecture Research ☆293 · Updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆274 · Updated last month
- 🧱 Modula software package ☆237 · Updated last month
- PyTorch Single Controller ☆419 · Updated this week
- Normalized Transformer (nGPT) ☆188 · Updated 10 months ago
- ☆281 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆162 · Updated 2 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆301 · Updated 2 months ago
- Load compute kernels from the Hub ☆283 · Updated this week
- Open-source framework for the research and development of foundation models ☆439 · Updated this week
- ☆217 · Updated 7 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆96 · Updated last month
- Efficient optimizers ☆261 · Updated this week
- ☆88 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆605 · Updated last week
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism ☆89 · Updated 2 weeks ago
- 📄 Small Batch Size Training for Language Models ☆62 · Updated 3 weeks ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 9 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆185 · Updated 3 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆401 · Updated 3 weeks ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆71 · Updated 4 months ago
- Getting crystal-like representations with harmonic loss ☆194 · Updated 5 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 3 months ago
- Minimal yet performant LLM examples in pure JAX ☆158 · Updated last week
- ☆67 · Updated 10 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆210 · Updated 9 months ago
- ☆187 · Updated last month