facebookresearch/optimizers
For optimization algorithm research and development.
☆521 · Updated this week
Alternatives and similar repositories for optimizers
Users interested in optimizers are comparing it to the libraries listed below.
- Annotated version of the Mamba paper ☆485 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆385 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆278 · Updated last year
- ☆303 · Updated last year
- ☆270 · Updated 11 months ago
- Efficient optimizers ☆212 · Updated last week
- TensorDict is a tensor container dedicated to PyTorch. ☆930 · Updated this week
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆382 · Updated 2 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆595 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆554 · Updated this week
- Helpful tools and examples for working with flex-attention ☆831 · Updated last week
- Named tensors with first-class dimensions for PyTorch ☆331 · Updated 2 years ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆400 · Updated 2 weeks ago
- Scalable and Performant Data Loading ☆277 · Updated this week
- ☆435 · Updated 8 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆441 · Updated 3 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆251 · Updated 3 months ago
- Library for reading and processing ML training data. ☆457 · Updated this week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆397 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆284 · Updated 2 weeks ago
- Puzzles for exploring transformers ☆349 · Updated 2 years ago
- ☆188 · Updated 6 months ago
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs. ☆381 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆508 · Updated last week
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆425 · Updated 6 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆91 · Updated 2 months ago
- Implementation of Flash Attention in Jax ☆213 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 9 months ago
- ☆317 · Updated this week
- 🧱 Modula software package ☆200 · Updated 2 months ago