fla-org / flash-bidirectional-linear-attention
Triton implementation of bi-directional (non-causal) linear attention
☆65 · Updated last week
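For context, non-causal linear attention replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so every query position shares one d×d key-value summary instead of materializing an N×N attention matrix. Below is a minimal PyTorch sketch of the idea; the elu+1 feature map and tensor layout are illustrative assumptions (following Katharopoulos et al., 2020), not the repository's Triton kernels:

```python
import torch
import torch.nn.functional as F

def bidirectional_linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention in O(N * d^2) time.

    q, k: (batch, seq, dim); v: (batch, seq, dim_v).
    Assumes the elu+1 feature map; actual kernels may differ.
    """
    phi_q = F.elu(q) + 1.0  # positive feature map
    phi_k = F.elu(k) + 1.0
    # Without a causal mask, one key-value summary matrix and one
    # normalizer vector are shared by every query position.
    kv = torch.einsum('bnd,bne->bde', phi_k, v)    # (batch, dim, dim_v)
    z = phi_k.sum(dim=1)                           # (batch, dim)
    num = torch.einsum('bnd,bde->bne', phi_q, kv)  # (batch, seq, dim_v)
    den = torch.einsum('bnd,bd->bn', phi_q, z)     # (batch, seq)
    return num / (den.unsqueeze(-1) + eps)
```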
Alternatives and similar repositories for flash-bidirectional-linear-attention
Users interested in flash-bidirectional-linear-attention are comparing it to the libraries listed below.
- flex-block-attn: an efficient block sparse attention computation library ☆108 · Updated last month
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated 5 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Updated last month
- Here we will test various linear attention designs. ☆62 · Updated last year
- Fast and memory-efficient exact k-means ☆138 · Updated last week
- Flash-Linear-Attention models beyond language ☆21 · Updated 5 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆129 · Updated 8 months ago
- ☆106 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆108 · Updated 8 months ago
- ☆32 · Updated last year
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆25 · Updated 11 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- ☆270 · Updated 8 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆233 · Updated 7 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated last year
- A repository for DenseSSMs ☆89 · Updated last year
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆68 · Updated last year
- ☆105 · Updated 11 months ago
- ☆48 · Updated last month
- ☆66 · Updated 7 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" ☆71 · Updated 3 weeks ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆54 · Updated last month
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training ☆36 · Updated 7 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆95 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆32 · Updated 4 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆70 · Updated 7 months ago
- Low-bit optimizers for PyTorch ☆138 · Updated 2 years ago