ClashLuke / SOAP
☆22 · Updated last year
Alternatives and similar repositories for SOAP
Users who are interested in SOAP are comparing it to the libraries listed below.
- ☆34 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- ☆50 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 11 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (see the EMA-gradient sketch after this list) ☆103 · Updated last year
- Fast, Modern, and Low-Precision PyTorch Optimizers ☆124 · Updated last month
- Triton implementation of the HyperAttention algorithm ☆48 · Updated 2 years ago
- ☆19 · Updated 2 months ago
- ☆53 · Updated 2 years ago
- ☆82 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 6 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the embedding-init sketch after this list) ☆61 · Updated 3 years ago
- Fork of the Flame repo for training some new stuff in development ☆19 · Updated last month
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated 2 years ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated last month
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated last year
- Shaping capabilities with token-level pretraining data filtering ☆75 · Updated last week
- ☆92 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆20 · Updated 2 years ago
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Utilities for PyTorch distributed ☆25 · Updated 11 months ago
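
For the Grokfast entry above, here is a minimal sketch of the core idea as described in the paper title: amplify the slow (EMA) component of each gradient before the optimizer step. This is not the linked repo's actual API; the function name and the `alpha`/`lamb` defaults are assumptions for illustration.

```python
# Hedged sketch of the Grokfast idea: maintain an exponential moving
# average of each parameter's gradient (the "slow" component) and add
# an amplified copy of it back into the live gradient in place.
# `alpha` and `lamb` defaults are illustrative assumptions.
import torch

@torch.no_grad()
def grokfast_ema(model, ema_grads, alpha=0.98, lamb=2.0):
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        if name not in ema_grads:
            ema_grads[name] = torch.zeros_like(p.grad)
        ema_grads[name].mul_(alpha).add_(p.grad, alpha=1 - alpha)  # update slow EMA
        p.grad.add_(ema_grads[name], alpha=lamb)  # amplify the slow component
    return ema_grads
```

It would be called between `loss.backward()` and `optimizer.step()`, with `ema_grads` persisted across steps.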
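
And for the LayerNorm(SmallInit(Embedding)) entry, a self-contained sketch of the trick as commonly described: initialize the token embedding with a very small range, then normalize its output with LayerNorm. The `1e-4` init scale is an assumption, not taken from the linked repo.

```python
# Sketch of the LayerNorm(SmallInit(Embedding)) convergence trick.
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.emb.weight, -init_scale, init_scale)  # small init (assumed scale)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        return self.norm(self.emb(idx))  # LayerNorm(SmallInit(Embedding))
```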