cloneofsimo / zeroshampoo
☆34 · Updated last year
Alternatives and similar repositories for zeroshampoo
Users interested in zeroshampoo are comparing it to the libraries listed below.
- ☆19 · Updated 4 months ago
- ☆23 · Updated last year
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD only, don't use it for Adam · ☆85 · Updated last year
- Latent Diffusion Language Models · ☆69 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs · ☆36 · Updated 2 years ago
- ☆89 · Updated last year
- Code for the paper "Function-Space Learning Rates" · ☆23 · Updated 4 months ago
- Utilities for PyTorch distributed · ☆25 · Updated 7 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training · ☆132 · Updated last year
- Supporting PyTorch FSDP for optimizers · ☆84 · Updated 9 months ago
- Focused on fast experimentation and simplicity · ☆75 · Updated 9 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ☆61 · Updated 7 months ago
- A JAX implementation of the continuous-time formulation of Consistency Models · ☆85 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆44 · Updated 2 years ago
- ☆21 · Updated 10 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun · ☆56 · Updated 6 months ago
- WIP · ☆93 · Updated last year
- ☆53 · Updated last year
- ☆53 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) · ☆54 · Updated 6 months ago
- ☆58 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" · ☆102 · Updated 9 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ☆100 · Updated last year
- ☆82 · Updated last year
- Train vision models using JAX and 🤗 transformers · ☆100 · Updated 2 weeks ago
- FID computation in JAX/Flax · ☆28 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ☆53 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ☆59 · Updated 3 years ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto · ☆57 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers · ☆112 · Updated 3 weeks ago