facebookresearch / optimizers
For optimization algorithm research and development.
☆518 · Updated this week
Alternatives and similar repositories for optimizers
Users interested in optimizers are comparing it to the libraries listed below.
- Implementation of Diffusion Transformer (DiT) in JAX ☆276 · Updated 11 months ago
- ☆267 · Updated 10 months ago
- Library for reading and processing ML training data. ☆447 · Updated this week
- ☆431 · Updated 7 months ago
- TensorDict is a tensor container dedicated to PyTorch. ☆925 · Updated this week
- Annotated version of the Mamba paper ☆482 · Updated last year
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆374 · Updated last month
- Puzzles for exploring transformers ☆347 · Updated 2 years ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements… ☆381 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆544 · Updated this week
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆287 · Updated 9 months ago
- Helpful tools and examples for working with flex-attention ☆802 · Updated last week
- Named tensors with first-class dimensions for PyTorch ☆329 · Updated last year
- 🧱 Modula software package ☆194 · Updated 2 months ago
- Scalable and Performant Data Loading ☆267 · Updated last week
- ☆301 · Updated 11 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆584 · Updated this week
- Efficient optimizers ☆206 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆482 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆392 · Updated this week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆380 · Updated last month
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆237 · Updated 3 months ago
- ☆182 · Updated 5 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆427 · Updated 3 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆91 · Updated 2 months ago
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β₂ with the Optimal Rate" ☆424 · Updated 5 months ago
- CLU lets you write beautiful training loops in JAX. ☆343 · Updated last month
- Orbax provides common checkpointing and persistence utilities for JAX users ☆382 · Updated this week
- What would you do with 1000 H100s... ☆1,048 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆495 · Updated 2 years ago