fal-ai-community / NativeSparseAttention
Research implementation of Native Sparse Attention (arXiv:2502.11089)
☆61 · Updated 7 months ago
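For context, the core idea the repository implements is attending each query to a small set of selected key/value blocks rather than to the full sequence. The sketch below is a minimal, non-causal toy version of that block-selection step in PyTorch; the function name, tensor shapes, and the mean-pooled block scoring are illustrative assumptions and do not reflect this repository's actual API or the paper's full compression/selection/sliding-window design.

```python
# Hedged sketch of block-sparse attention selection (single head, no causal mask).
# All names here are illustrative, not taken from the NativeSparseAttention repo.
import torch
import torch.nn.functional as F


def block_sparse_attention(q, k, v, block_size=64, top_k=4):
    """Attend each query to its top-k key/value blocks.

    q, k, v: (T, d) tensors. Block scores use mean-pooled block keys as a
    cheap proxy before running dense attention inside the chosen blocks.
    """
    T, d = k.shape
    n_blocks = T // block_size
    k_blocks = k[: n_blocks * block_size].view(n_blocks, block_size, d)
    v_blocks = v[: n_blocks * block_size].view(n_blocks, block_size, d)

    # Coarse scores: each query against mean-pooled block keys.
    block_keys = k_blocks.mean(dim=1)                        # (n_blocks, d)
    coarse = q @ block_keys.T / d ** 0.5                     # (T, n_blocks)
    top = coarse.topk(min(top_k, n_blocks), dim=-1).indices  # (T, top_k)

    # Gather the selected blocks per query, then do dense attention on them only.
    sel_k = k_blocks[top].reshape(T, -1, d)                  # (T, top_k*block_size, d)
    sel_v = v_blocks[top].reshape(T, -1, d)
    attn = F.softmax((sel_k @ q.unsqueeze(-1)).squeeze(-1) / d ** 0.5, dim=-1)
    return (attn.unsqueeze(-1) * sel_v).sum(dim=1)           # (T, d)


if __name__ == "__main__":
    q = torch.randn(256, 32)
    k = torch.randn(256, 32)
    v = torch.randn(256, 32)
    print(block_sparse_attention(q, k, v).shape)  # torch.Size([256, 32])
```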
Alternatives and similar repositories for NativeSparseAttention
Users interested in NativeSparseAttention are comparing it to the libraries listed below.
- Focused on fast experimentation and simplicity ☆75 · Updated 9 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- ☆34 · Updated last year
- ☆89 · Updated last year
- ☆64 · Updated 6 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆128 · Updated last year
- ☆19 · Updated 4 months ago
- ☆67 · Updated 10 months ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 weeks ago
- Supporting code for the blog post on modular manifolds ☆39 · Updated last week
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆84 · Updated 3 weeks ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆112 · Updated last month
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD only; don't use it for Adam ☆85 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆47 · Updated last month
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆70 · Updated last month
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- Experiment using Tangent to autodiff Triton ☆81 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆189 · Updated 3 months ago
- WIP ☆93 · Updated last year
- Here we will test various linear attention designs ☆62 · Updated last year
- ☆21 · Updated 10 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Implementation of the proposed MaskBit from Bytedance AI ☆82 · Updated 10 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆102 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- Mixture of A Million Experts ☆48 · Updated last year