facebookresearch / optimizers
For optimization algorithm research and development.
☆525 · Updated this week
Alternatives and similar repositories for optimizers
Users interested in optimizers are comparing it to the libraries listed below.
- Annotated version of the Mamba paper ☆487 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆389 · Updated this week
- TensorDict is a PyTorch-dedicated tensor container. ☆949 · Updated this week
- Scalable and Performant Data Loading ☆291 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆286 · Updated last year
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆392 · Updated 3 months ago
- Efficient optimizers ☆252 · Updated last week
- ☆304 · Updated last year
- ☆275 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆274 · Updated 2 weeks ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆630 · Updated this week
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆290 · Updated 11 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆565 · Updated this week
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Library for reading and processing ML training data. ☆487 · Updated this week
- Best practices & guides on how to write distributed PyTorch training code ☆460 · Updated 5 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆184 · Updated 10 months ago
- Puzzles for exploring transformers ☆356 · Updated 2 years ago
- ☆206 · Updated 8 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆500 · Updated last month
- Helpful tools and examples for working with flex-attention ☆904 · Updated 2 weeks ago
- PyTorch Single Controller ☆341 · Updated last week
- 🧱 Modula software package ☆210 · Updated last week
- ☆443 · Updated 9 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆411 · Updated last month
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆429 · Updated 7 months ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆445 · Updated this week
- Implementation of Flash Attention in Jax ☆215 · Updated last year
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆516 · Updated this week
- UNet diffusion model in pure CUDA ☆613 · Updated last year