fal-ai-community / NativeSparseAttention
Research implementation of Native Sparse Attention (arXiv:2502.11089)
☆54 · Updated 4 months ago
Alternatives and similar repositories for NativeSparseAttention
Users interested in NativeSparseAttention are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 7 months ago
- Focused on fast experimentation and simplicity ☆76 · Updated 6 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆54 · Updated 4 months ago
- ☆79 · Updated last year
- PyTorch implementation of the PEER block from the paper Mixture of A Million Experts by Xu Owen He at DeepMind ☆127 · Updated 10 months ago
- Mixture of A Million Experts ☆46 · Updated 11 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆64 · Updated 2 weeks ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆34 · Updated this week
- ☆21 · Updated 8 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆138 · Updated last month
- DeMo: Decoupled Momentum Optimization ☆189 · Updated 7 months ago
- ☆59 · Updated 3 months ago
- ☆61 · Updated 8 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆97 · Updated this week
- ☆34 · Updated 10 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 10 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆64 · Updated last year
- WIP ☆93 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 2 months ago
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆82 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆129 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- ☆112 · Updated last year
- ☆19 · Updated last month
- Griffin MQA + Hawk Linear RNN Hybrid ☆87 · Updated last year
- Implementations of attention with the softpick function, naive and FlashAttention-2 ☆80 · Updated 2 months ago
- ☆24 · Updated 2 months ago