ClashLuke / SOAP
☆22 · Updated last year
Alternatives and similar repositories for SOAP
Users interested in SOAP are comparing it to the libraries listed below.
- ☆34 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆120 · Updated last month
- ☆53 · Updated 2 years ago
- ☆82 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- ☆50 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆132 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 6 months ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- ☆19 · Updated last month
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- Utilities for PyTorch distributed ☆25 · Updated 11 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 11 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆92 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated last month
- ☆92 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 3 months ago
- ☆91 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- Token Omission Via Attention ☆128 · Updated last year