fal-ai-community / NativeSparseAttention
Research implementation of Native Sparse Attention (arXiv:2502.11089)
☆54 · Updated 4 months ago
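The paper this repo implements (Native Sparse Attention, arXiv:2502.11089) routes each query through three branches, block-compressed attention, top-k block selection, and a sliding window, and mixes their outputs with learned gates. Below is a minimal, single-head PyTorch sketch of that idea for orientation only: the function name `nsa_sketch`, its arguments, the mean-pooling compression, and the fixed gate weights are illustrative assumptions, not this repository's API, and it omits the custom kernels, causal masking, and GQA handling a real implementation needs.

```python
import torch
import torch.nn.functional as F

def nsa_sketch(q, k, v, block=64, top_k=4, window=128, gates=None):
    # q, k, v: [seq, dim] single-head tensors; assumes seq is a multiple of `block`.
    # Causal masking is omitted for brevity.
    seq, dim = k.shape
    scale = dim ** -0.5

    # Branch 1: compression -- mean-pool each KV block into one coarse token.
    n_blocks = seq // block
    k_cmp = k.view(n_blocks, block, dim).mean(dim=1)
    v_cmp = v.view(n_blocks, block, dim).mean(dim=1)
    attn_cmp = F.softmax(q @ k_cmp.T * scale, dim=-1)           # [seq, n_blocks]
    out_cmp = attn_cmp @ v_cmp

    # Branch 2: selection -- reuse the compressed scores to pick top-k blocks per
    # query, then attend densely inside the selected blocks only.
    sel = attn_cmp.topk(min(top_k, n_blocks), dim=-1).indices   # [seq, top_k]
    out_sel = torch.zeros_like(q)
    offsets = torch.arange(block)
    for i in range(seq):
        idx = (sel[i][:, None] * block + offsets).flatten()     # token ids of chosen blocks
        w = F.softmax(q[i] @ k[idx].T * scale, dim=-1)
        out_sel[i] = w @ v[idx]

    # Branch 3: sliding window -- dense attention over the last `window` tokens.
    out_win = torch.zeros_like(q)
    for i in range(seq):
        lo = max(0, i - window + 1)
        w = F.softmax(q[i] @ k[lo:i + 1].T * scale, dim=-1)
        out_win[i] = w @ v[lo:i + 1]

    # The paper learns per-query gates; fixed equal weights here for illustration.
    g = gates if gates is not None else torch.full((3,), 1.0 / 3)
    return g[0] * out_cmp + g[1] * out_sel + g[2] * out_win
```

With `q = k = v = torch.randn(256, 64)`, `nsa_sketch(q, k, v)` returns a `[256, 64]` tensor; the point is only to show how the compressed-attention scores double as the signal for fine-grained block selection.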
Alternatives and similar repositories for NativeSparseAttention
Users interested in NativeSparseAttention are comparing it to the libraries listed below.
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆52 · Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆74 · Updated 6 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆78 · Updated 11 months ago
- ☆21 · Updated 7 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 10 months ago
- ☆34 · Updated 9 months ago
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 weeks ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆131 · Updated last week
- ☆79 · Updated 10 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆94 · Updated this week
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD only; don't use it for Adam. ☆80 · Updated 10 months ago
- WIP ☆93 · Updated 10 months ago
- ☆56 · Updated 3 months ago
- ☆19 · Updated last month
- Here we will test various linear attention designs. ☆59 · Updated last year
- ☆28 · Updated 6 months ago
- Combining SOAP and MUON ☆16 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at DeepMind ☆127 · Updated 10 months ago
- Mixture of A Million Experts ☆46 · Updated 10 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆108 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- Implementation of GateLoop Transformer in Pytorch and Jax ☆89 · Updated last year
- Utilities for PyTorch distributed ☆24 · Updated 3 months ago