facebookresearch / optimizers
For optimization algorithm research and development.
☆543 · Updated last week
Alternatives and similar repositories for optimizers
Users interested in optimizers are comparing it to the libraries listed below.
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆400 · Updated last week
- TensorDict is a PyTorch-dedicated tensor container. ☆980 · Updated 3 weeks ago
- Annotated version of the Mamba paper ☆490 · Updated last year
- ☆310 · Updated last year
- Efficient optimizers ☆276 · Updated 3 weeks ago
- Scalable and Performant Data Loading ☆335 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆294 · Updated last year
- Universal Notation for Tensor Operations in Python. ☆447 · Updated 7 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆322 · Updated 3 months ago
- ☆285 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆679 · Updated this week
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT, and more. ☆297 · Updated last year
- Named tensors with first-class dimensions for PyTorch ☆331 · Updated 2 years ago
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆426 · Updated 11 months ago
- 🧱 Modula software package ☆303 · Updated 2 months ago
- Dion optimizer algorithm ☆383 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆582 · Updated 3 months ago
- Library for reading and processing ML training data. ☆593 · Updated this week
- Puzzles for exploring transformers ☆376 · Updated 2 years ago
- ☆457 · Updated last year
- jax-triton contains integrations between JAX and OpenAI Triton ☆433 · Updated last month
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆96 · Updated 3 months ago
- ☆222 · Updated 11 months ago
- Implementation of Flash Attention in JAX ☆220 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆536 · Updated 3 weeks ago
- ☆177 · Updated last year
- Speed up model training by fixing data loading. ☆556 · Updated last week
- ☆150 · Updated last year
- UNet diffusion model in pure CUDA ☆654 · Updated last year