fal-ai-community / NativeSparseAttention
Research implementation of Native Sparse Attention (arXiv:2502.11089)
☆54 · Updated 2 months ago
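As context for the comparisons below, here is a minimal sketch of the blockwise top-k selection at the core of Native Sparse Attention. It is an illustration, not the repo's actual API: it assumes mean-pooled key blocks as the block summaries and omits the paper's compression branch, sliding-window branch, learned gating, and causal masking.

```python
import torch

def topk_block_sparse_attention(q, k, v, block_size=16, top_k=4):
    # q, k, v: (batch, seq_len, dim); seq_len must divide evenly into blocks
    b, n, d = q.shape
    nb = n // block_size
    # Summarize each key block by mean pooling: (b, nb, d)
    k_blocks = k.view(b, nb, block_size, d).mean(dim=2)
    # Score every query against every block summary, keep the top-k blocks
    block_scores = torch.einsum("bqd,bnd->bqn", q, k_blocks) / d**0.5
    topk_idx = block_scores.topk(top_k, dim=-1).indices        # (b, n, top_k)
    # Expand the selected blocks into a token-level attention mask
    mask = torch.zeros(b, n, nb, dtype=torch.bool, device=q.device)
    mask.scatter_(-1, topk_idx, True)
    mask = mask.repeat_interleave(block_size, dim=-1)          # (b, n, n)
    # Attend only within the selected blocks
    scores = torch.einsum("bqd,bkd->bqk", q, k) / d**0.5
    attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
    return torch.einsum("bqk,bkd->bqd", attn, v)

q = k = v = torch.randn(2, 128, 32)
out = topk_block_sparse_attention(q, k, v)   # (2, 128, 32)
```

A real kernel would gather only the selected KV blocks rather than materializing a dense mask; the mask here just keeps the sketch short.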
Alternatives and similar repositories for NativeSparseAttention
Users interested in NativeSparseAttention are comparing it to the libraries listed below.
- Supports PyTorch FSDP for optimizers ☆80 · Updated 5 months ago
- Focused on fast experimentation and simplicity ☆72 · Updated 4 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆50 · Updated 2 months ago
- ☆21 · Updated 6 months ago
- Experiment using Tangent to autodiff Triton ☆78 · Updated last year
- ☆79 · Updated 10 months ago
- ☆19 · Updated last month
- WIP ☆93 · Updated 9 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆124 · Updated 8 months ago
- ☆33 · Updated 8 months ago
- Simple implementation of muP, based on "A Spectral Condition for Feature Learning"; the implementation is SGD-only, so don't use it for Adam (see the spectral-scaling sketch after this list) ☆76 · Updated 9 months ago
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer (see the Newton-Schulz sketch after this list) ☆103 · Updated last week
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆92 · Updated 10 months ago
- Normalized Transformer (nGPT) ☆176 · Updated 5 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆99 · Updated this week
- ☆28 · Updated 5 months ago
- ☆55 · Updated last month
- ☆53 · Updated last year
- Here we will test various linear attention designs. ☆60 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last week
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 8 months ago
- ☆106 · Updated 11 months ago
- DeMo: Decoupled Momentum Optimization ☆186 · Updated 5 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆123 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆120 · Updated this week
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆86 · Updated last year
- ☆60 · Updated 6 months ago
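For the muP entry above: a minimal sketch, assuming the usual reading of the spectral condition for SGD, where each Linear layer is initialized to spectral norm roughly sqrt(fan_out / fan_in) and its per-layer SGD learning rate is scaled by fan_out / fan_in. These rules are SGD-specific, which is the reason the entry warns against Adam. The function name and constants are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn

def spectral_sgd(model: nn.Module, base_lr: float = 0.1) -> torch.optim.SGD:
    param_groups = []
    for layer in model.modules():
        if isinstance(layer, nn.Linear):
            fan_out, fan_in = layer.weight.shape
            # Target spectral norm ~ sqrt(fan_out / fan_in): for an i.i.d.
            # Gaussian matrix, E||W||_2 ~ std * (sqrt(fan_in) + sqrt(fan_out))
            std = (fan_out / fan_in) ** 0.5 / (fan_in ** 0.5 + fan_out ** 0.5)
            nn.init.normal_(layer.weight, std=std)
            if layer.bias is not None:
                nn.init.zeros_(layer.bias)
            # Assumed SGD width-scaling rule under the spectral condition:
            # scale each layer's learning rate by fan_out / fan_in
            param_groups.append({"params": layer.parameters(),
                                 "lr": base_lr * fan_out / fan_in})
    return torch.optim.SGD(param_groups)

model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
opt = spectral_sgd(model)
```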
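And for the Flash-Muon entry: Muon's core step replaces the momentum matrix with an approximate orthogonalization computed by a quintic Newton-Schulz iteration. The coefficients below are the widely circulated ones from Keller Jordan's reference Muon implementation; the surrounding update is a simplified sketch, not Flash-Muon's fused kernel.

```python
import torch

def newton_schulz5(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    # Quintic Newton-Schulz iteration: drives the singular values of G toward 1,
    # i.e. approximately maps G to its nearest semi-orthogonal matrix.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)           # Frobenius norm bounds the spectral norm
    transposed = X.shape[0] > X.shape[1]
    if transposed:                     # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

# One simplified Muon-style step for a 2D weight: accumulate momentum,
# orthogonalize it, then apply it like an SGD update.
W = torch.randn(256, 128)
momentum = torch.zeros_like(W)
grad = torch.randn_like(W)             # stand-in for a real gradient
momentum.mul_(0.95).add_(grad)
W.add_(newton_schulz5(momentum), alpha=-0.02)
```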