apple / ml-sigmoid-attention
☆304 · Updated 8 months ago
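For context: this repository accompanies Apple's work on sigmoid attention, which swaps the row-wise softmax in scaled dot-product attention for an elementwise sigmoid plus a bias. Below is a minimal PyTorch sketch of that idea, not code from the repository itself; the function name and the `-log(seq_len)` bias are assumptions based on the recipe described in the accompanying paper.

```python
import math
import torch

def sigmoid_attention(q, k, v):
    """Minimal sigmoid-attention sketch (not the repo's implementation).

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    """
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    # Usual scaled dot-product scores.
    scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
    # Elementwise sigmoid replaces softmax; the -log(seq_len) bias (an
    # assumption here, following the paper's described recipe) keeps the
    # total attention mass per query near 1 at initialization.
    weights = torch.sigmoid(scores - math.log(seq_len))
    return weights @ v

# Usage: out = sigmoid_attention(*[torch.randn(1, 8, 128, 64) for _ in range(3)])
```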
Alternatives and similar repositories for ml-sigmoid-attention
Users interested in ml-sigmoid-attention are comparing it to the libraries listed below.
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆415 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆443 · Updated 3 weeks ago
- Normalized Transformer (nGPT) ☆195 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆233 · Updated 2 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI ☆294 · Updated 7 months ago
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆581 · Updated 11 months ago
- Implementations and experimentation on mHC by DeepSeek - https://arxiv.org/abs/2512.24880 ☆202 · Updated last week
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆225 · Updated 6 months ago
- ☆204 · Updated last year
- ☆263 · Updated 7 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆397 · Updated 3 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆118 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated 3 weeks ago
- 🔥 A minimal training framework for scaling FLA models ☆333 · Updated last month
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆449 · Updated 7 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆340 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆244 · Updated 7 months ago
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆445 · Updated 2 months ago
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆791 · Updated 4 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆549 · Updated 7 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆181 · Updated 6 months ago
- Pytorch implementation of the PEER block from the paper Mixture of A Million Experts, by Xu Owen He at DeepMind ☆132 · Updated 2 months ago
- Annotated version of the Mamba paper ☆493 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 10 months ago
- ☆207 · Updated 3 weeks ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆111 · Updated last month