sustcsonglin / linear-attention-and-beyond-slides
☆106 · Updated Feb 25, 2025
Alternatives and similar repositories for linear-attention-and-beyond-slides
Users interested in linear-attention-and-beyond-slides are comparing it to the libraries listed below.
- PyTorch implementation of the Flash Spectral Transform Unit. ☆21 · Updated Sep 19, 2024
- ☆35 · Updated Mar 7, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- ☆118 · Updated May 19, 2025
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆965 · Updated Feb 5, 2026
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆28 · Updated Aug 19, 2025
- ☆129 · Updated Jun 6, 2025
- ☆65 · Updated Apr 26, 2025
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated Jan 23, 2024
- ☆97 · Updated Mar 26, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- ☆32 · Updated Jul 2, 2025
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Updated Dec 19, 2025
- ☆52 · Updated May 19, 2025
- Welcome to the 'In Context Learning Theory' Reading Group ☆30 · Updated Nov 8, 2024
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated Feb 24, 2025
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Updated Jul 6, 2025
- ☆52 · Updated Jun 6, 2024
- ☆38 · Updated Aug 7, 2025
- Triton implementation of bi-directional (non-causal) linear attention ☆65 · Updated Feb 2, 2026
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Updated Mar 13, 2023
- ☆20 · Updated Nov 4, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 · Updated Aug 9, 2025
- 🔥 A minimal training framework for scaling FLA models ☆344 · Updated Nov 15, 2025
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated Sep 4, 2025
- Efficient Triton implementation of Native Sparse Attention. ☆263 · Updated May 23, 2025
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regression" ☆23 · Updated Oct 1, 2025
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆235 · Updated Jun 15, 2025
- ☆21 · Updated Mar 3, 2025
- A sparse attention kernel supporting mixed sparse patterns ☆455 · Updated Jan 18, 2026
- Distributed Compiler based on Triton for Parallel Systems ☆1,350 · Updated Feb 9, 2026
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated Mar 24, 2025
- ☆66 · Updated Jul 8, 2025
- [ACL 2023] Are Pre-trained Language Models Useful for Model Ensemble in Chinese Grammatical Error Correction? ☆10 · Updated Dec 15, 2025
- Automated bottleneck detection and solution orchestration ☆19 · Updated this week
- Image Tokenizer Needs Post-Training ☆24 · Updated Oct 4, 2025
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025
- ☆13 · Updated Jun 18, 2024