ethansmith2000 / fsdp_optimizers
Supporting PyTorch FSDP for optimizers
☆83 · Updated 11 months ago
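The repo description is terse, so as a rough illustration of the general pattern "FSDP for optimizers" refers to, here is a minimal sketch using only stock PyTorch APIs (`torch.distributed.fsdp`), not this repository's own interface: the optimizer is constructed after wrapping, so its state is built over the sharded parameters.

```python
# Minimal sketch of pairing an optimizer with PyTorch FSDP so optimizer state
# is sharded alongside parameters. Uses only standard torch.distributed.fsdp
# APIs; this is NOT fsdp_optimizers' own interface.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")  # assumes launch via torchrun (RANK/WORLD_SIZE set)
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
).cuda()
model = FSDP(model)  # shards parameters across ranks

# Build the optimizer AFTER wrapping, over the sharded parameters,
# so each rank only holds its shard of the optimizer state.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).square().mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```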
Alternatives and similar repositories for fsdp_optimizers
Users interested in fsdp_optimizers are comparing it to the repositories listed below.
- WIP ☆93 · Updated last year
- ☆91 · Updated last year
- Efficient optimizers ☆276 · Updated 3 weeks ago
- Supporting code for the blog post on modular manifolds. ☆101 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆171 · Updated 4 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆85 · Updated last year
- A library for unit scaling in PyTorch ☆132 · Updated 4 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 8 months ago
- ☆53 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆189 · Updated last year
- Focused on fast experimentation and simplicity ☆75 · Updated 10 months ago
- ☆221 · Updated 11 months ago
- DeMo: Decoupled Momentum Optimization ☆196 · Updated 11 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 2 months ago
- Understand and test language model architectures on synthetic tasks. ☆237 · Updated last month
- ☆121 · Updated last year
- ☆68 · Updated 11 months ago
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- 🧱 Modula software package ☆300 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 5 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆132 · Updated 10 months ago
- ☆34 · Updated last year
- ☆53 · Updated last year
- ☆61 · Updated last year
- Normalized Transformer (nGPT) ☆192 · Updated 11 months ago
- JAX bindings for Flash Attention v2 ☆97 · Updated last week
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆70 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year