facebookresearch/optimizers
For optimization algorithm research and development.
☆543 · Updated this week
Alternatives and similar repositories for optimizers
Users interested in optimizers are comparing it to the libraries listed below.
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆400 · Updated this week
- TensorDict is a PyTorch-dedicated tensor container. ☆972 · Updated this week
- Annotated version of the Mamba paper ☆489 · Updated last year
- Scalable and Performant Data Loading ☆311 · Updated this week
- ☆309 · Updated last year
- Efficient optimizers ☆274 · Updated last week
- Implementation of Diffusion Transformer (DiT) in JAX ☆295 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆320 · Updated 3 months ago
- Universal Notation for Tensor Operations in Python ☆438 · Updated 6 months ago
- 🧱 Modula software package ☆291 · Updated 2 months ago
- The AdEMAMix Optimizer: Better, Faster, Older ☆186 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆671 · Updated this week
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more ☆295 · Updated last year
- ☆283 · Updated last year
- Best practices and guides on how to write distributed PyTorch training code ☆517 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆578 · Updated 2 months ago
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆425 · Updated 10 months ago
- Library for reading and processing ML training data ☆570 · Updated this week
- ☆456 · Updated last year
- ☆218 · Updated 10 months ago
- Puzzles for exploring transformers ☆373 · Updated 2 years ago
- Helpful tools and examples for working with flex-attention ☆1,029 · Updated this week
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆96 · Updated 3 months ago
- Implementation of Flash Attention in JAX ☆219 · Updated last year
- ☆174 · Updated last year
- Dion optimizer algorithm ☆369 · Updated 3 weeks ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆428 · Updated last week
- Load compute kernels from the Hub ☆304 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆420 · Updated last week
- Speed up model training by fixing data loading ☆551 · Updated last week