ClashLuke / SOAP
☆21 · Updated last year
Alternatives and similar repositories for SOAP
Users interested in SOAP are comparing it to the libraries listed below.
- ☆34 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers · ☆116 · Updated 2 months ago
- ☆19 · Updated 6 months ago
- ☆82 · Updated last year
- Supporting PyTorch FSDP for optimizers · ☆84 · Updated 11 months ago
- ☆50 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ☆63 · Updated 9 months ago
- H-Net Dynamic Hierarchical Architecture · ☆80 · Updated 2 months ago
- Collection of autoregressive model implementations · ☆86 · Updated 6 months ago
- Triton implementation of the HyperAttention algorithm · ☆48 · Updated last year
- ☆53 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" · ☆103 · Updated 10 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers · ☆18 · Updated 3 months ago
- Implementation of the GateLoop Transformer in PyTorch and JAX · ☆90 · Updated last year
- ☆91 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training · ☆132 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit · ☆62 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated 5 months ago
- Utilities for PyTorch distributed · ☆25 · Updated 8 months ago
- ☆20 · Updated 2 years ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers · ☆66 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆51 · Updated last year
- ☆61 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the first sketch after this list) · ☆61 · Updated 3 years ago
- ☆23 · Updated 11 months ago
- Latent Diffusion Language Models · ☆69 · Updated 2 years ago
- Simple implementation of muP, based on "A Spectral Condition for Feature Learning". The implementation is SGD-only; don't use it for Adam (see the second sketch after this list) · ☆85 · Updated last year
- Automatically take good care of your preemptible TPUs · ☆37 · Updated 2 years ago
- Code for the paper "Function-Space Learning Rates" · ☆23 · Updated 5 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Updated last year
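
The LayerNorm(SmallInit(Embedding)) entry above refers to a convergence trick: initialize the token embedding near zero and apply LayerNorm immediately after it. Below is a minimal PyTorch sketch of that idea; the `SmallInitEmbedding` name and the `1e-4` init scale are illustrative assumptions, not values taken from that repository.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Embedding initialized near zero, followed by LayerNorm.

    Sketch of the 'LayerNorm(SmallInit(Embedding))' idea: a tiny
    embedding init plus an immediate LayerNorm is reported to speed
    up early convergence in Transformers.
    """

    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Near-zero init (assumed scale) instead of the default N(0, 1).
        nn.init.uniform_(self.embed.weight, -init_scale, init_scale)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # LayerNorm rescales the tiny embedding vectors to unit-ish
        # norm while their directions start almost isotropic.
        return self.norm(self.embed(token_ids))

emb = SmallInitEmbedding(vocab_size=50257, d_model=768)
x = emb(torch.randint(0, 50257, (2, 16)))  # (batch, seq, d_model)
```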
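
The muP entry is based on the spectral condition, which asks each weight matrix and each update to have spectral norm on the order of sqrt(fan_out / fan_in). The sketch below is one reading of that condition, not code from the repository: the Gaussian init std and the SGD learning-rate scaling of fan_out / fan_in are my assumptions about how to satisfy it, and, as the repo description warns, this reasoning applies to SGD only, not Adam.

```python
import torch
import torch.nn as nn

def spectral_init_(linear: nn.Linear) -> None:
    # Target: ||W||_2 ~ sqrt(fan_out / fan_in). A Gaussian matrix with
    # entry std sigma has spectral norm ~ sigma * (sqrt(fan_in) +
    # sqrt(fan_out)), so this sigma is one way to hit the target scale
    # (an assumption, see lead-in).
    fan_out, fan_in = linear.weight.shape
    sigma = (fan_out / fan_in) ** 0.5 / (fan_in ** 0.5 + fan_out ** 0.5)
    nn.init.normal_(linear.weight, std=sigma)
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)

def sgd_param_groups(model: nn.Module, base_lr: float) -> list[dict]:
    # Per-layer SGD learning rates scaled by fan_out / fan_in, so update
    # spectral norms stay on the same scale as the weights (assumed
    # scaling under the spectral condition).
    groups = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            spectral_init_(module)
            fan_out, fan_in = module.weight.shape
            groups.append({"params": list(module.parameters()),
                           "lr": base_lr * fan_out / fan_in})
    return groups

mlp = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256))
opt = torch.optim.SGD(sgd_param_groups(mlp, base_lr=0.1))
```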