warner-benjamin / optimi
Fast, Modern, and Low Precision PyTorch Optimizers
☆119 · Updated 2 weeks ago
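optimi's optimizers are positioned as drop-in replacements for their `torch.optim` counterparts. Below is a minimal sketch of what that swap might look like, assuming the package exposes an `AdamW` class with the standard PyTorch optimizer signature; the import path and keyword arguments are assumptions based on the project description, not a verified API.

```python
# Hedged sketch: using optimi's AdamW in place of torch.optim.AdamW.
# `from optimi import AdamW` and its keyword arguments are assumptions,
# not a confirmed API; consult the repository's documentation.
import torch
import torch.nn as nn
from optimi import AdamW  # assumed import path

# Low-precision weights, in line with the project's low-precision focus.
model = nn.Linear(128, 10, dtype=torch.bfloat16)
optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

for _ in range(10):
    x = torch.randn(32, 128, dtype=torch.bfloat16)
    loss = model(x).float().pow(2).mean()  # toy loss for illustration
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

How optimi maintains optimizer state under bfloat16 training (for example, whether compensated summation is used) is left to its documentation; the sketch only shows the intended drop-in usage pattern.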
Alternatives and similar repositories for optimi
Users interested in optimi are comparing it to the libraries listed below.
- ☆124 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- ☆92 · Updated last year
- ☆20 · Updated 2 years ago
- ☆50 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated 2 months ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- ☆82 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- ☆22 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- An experiment using Tangent to autodiff Triton ☆81 · Updated last year
- ☆53 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- ☆34 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 10 months ago
- Efficient optimizers ☆280 · Updated 3 weeks ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Collection of autoregressive model implementations ☆85 · Updated this week
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆71 · Updated this week
- Utilities for Training Very Large Models ☆58 · Updated last year