cloneofsimo / ezmup
Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it with Adam. A sketch of the underlying scaling rule follows below.
☆76 · Updated 9 months ago
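To make the SGD-only caveat concrete, the spectral condition prescribes initializing each weight matrix so its spectral norm is on the order of sqrt(fan_out / fan_in), and giving each layer an SGD learning rate proportional to fan_out / fan_in so that updates stay on the same spectral scale. The following is an illustrative sketch of that rule, not ezmup's actual API; the helper name, toy model, and widths are made up for the example.

```python
# Hedged sketch of the spectral-condition scaling rule for SGD.
# Illustrative only -- NOT ezmup's API; names and widths are hypothetical.
import math

import torch
import torch.nn as nn

def spectral_sgd(model: nn.Module, base_lr: float = 0.1) -> torch.optim.SGD:
    groups = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            fan_in, fan_out = module.in_features, module.out_features
            # Gaussian init whose spectral norm is ~ sqrt(fan_out / fan_in).
            std = min(1.0, math.sqrt(fan_out / fan_in)) / math.sqrt(fan_in)
            nn.init.normal_(module.weight, mean=0.0, std=std)
            if module.bias is not None:
                nn.init.zeros_(module.bias)
            # A per-layer SGD lr ~ fan_out / fan_in keeps the update's
            # spectral norm on the same sqrt(fan_out / fan_in) scale
            # (biases are lumped into the group for brevity).
            groups.append({"params": list(module.parameters()),
                           "lr": base_lr * fan_out / fan_in})
    return torch.optim.SGD(groups, lr=base_lr)

# Toy usage: widen the hidden layer and the per-layer lrs rescale automatically.
model = nn.Sequential(nn.Linear(32, 1024), nn.ReLU(), nn.Linear(1024, 10))
optimizer = spectral_sgd(model, base_lr=0.1)
```

Adam normalizes its updates rather than scaling them with the raw gradient, so the fan_out / fan_in learning-rate rule above is specific to SGD; a different per-layer rule would be needed for Adam, which is why the description warns against using this implementation with it.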
Alternatives and similar repositories for ezmup:
Users interested in ezmup are comparing it to the repositories listed below.
- These papers provide unique, insightful concepts that will broaden your perspective on neural networks and deep learning ☆48 · Updated last year
- Supports PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- ☆51 · Updated last year
- ☆33 · Updated 7 months ago
- ☆78 · Updated 10 months ago
- WIP ☆93 · Updated 8 months ago
- ☆28 · Updated 5 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆123 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆105 · Updated this week
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆49 · Updated last month
- Focused on fast experimentation and simplicity ☆71 · Updated 4 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- LoRA for arbitrary JAX models and functions ☆136 · Updated last year
- Sparse Autoencoders for Stable Diffusion XL models. ☆55 · Updated 3 weeks ago
- ☆60 · Updated 5 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆53 · Updated 2 months ago
- ☆27 · Updated last year
- ☆19 · Updated last month
- A JAX implementation of the continuous time formulation of Consistency Models ☆84 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆92 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 8 months ago
- Efficient optimizers ☆190 · Updated this week
- ☆53 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 4 months ago
- ☆95 · Updated last year
- ☆22 · Updated 10 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Updated last year