warner-benjamin / optimi
Fast, Modern, and Low Precision PyTorch Optimizers
☆108 · Updated 3 weeks ago
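For context on what optimi provides: its headline feature is training in low precision (e.g. BF16) by compensating for rounding error in the weight update via Kahan summation. Below is a minimal usage sketch, assuming the package mirrors the standard `torch.optim` API; the `kahan_sum` flag reflects that low-precision focus and should be verified against the project's documentation.

```python
# Minimal sketch, assuming optimi follows the torch.optim interface.
# The kahan_sum argument is an assumption drawn from optimi's stated
# low-precision feature set; check the docs for the exact name/defaults.
import torch
from torch import nn
import optimi

# A BF16 model, which is where Kahan-summed updates matter most.
model = nn.Linear(128, 64, dtype=torch.bfloat16)

opt = optimi.AdamW(
    model.parameters(),
    lr=1e-3,
    weight_decay=1e-2,
    kahan_sum=True,  # compensate BF16 rounding error in the update
)

x = torch.randn(32, 128, dtype=torch.bfloat16)
loss = model(x).float().pow(2).mean()
loss.backward()
opt.step()
opt.zero_grad()
```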
Alternatives and similar repositories for optimi
Users interested in optimi are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- ☆118 · Updated last year
- ☆20 · Updated 2 years ago
- ☆87 · Updated last year
- ☆49 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 5 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- ☆82 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- A repository for log-time feedforward networks ☆223 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- ☆21 · Updated 9 months ago
- ☆53 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- Easily run PyTorch on multiple GPUs & machines ☆46 · Updated last month
- Efficient optimizers ☆254 · Updated 3 weeks ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- ☆34 · Updated 11 months ago
- A library for unit scaling in PyTorch ☆129 · Updated last month
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- Load compute kernels from the Hub ☆244 · Updated this week
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆59 · Updated 3 years ago
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year