fla-org / flash-bidirectional-linear-attention
Triton implementation of bi-directional (non-causal) linear attention
☆64 · Updated last year
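For orientation, here is a minimal PyTorch sketch of the computation such a kernel fuses: non-causal linear attention with the ELU+1 feature map of Katharopoulos et al. (2020). The function and tensor names are illustrative assumptions, not the repository's API; the repo provides a fused Triton kernel rather than this einsum formulation.

```python
import torch
import torch.nn.functional as F

def bidirectional_linear_attention(q, k, v, eps=1e-6):
    """Non-causal (bidirectional) linear attention, O(n * d^2) per head.

    q, k, v: (batch, heads, seq_len, head_dim) tensors.
    Illustrative reference only; a fused kernel would avoid materializing
    the intermediate feature maps and summaries separately.
    """
    # Positive feature map so the normalizer stays well-defined.
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    # With no causal mask, keys/values are summarized once over the whole
    # sequence: (K^T V) is (d, d) per head, so the cost is linear in seq_len.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    # Per-query normalizer against the global key sum.
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
```

Because no causal mask is applied, the same key/value summary is shared by every query, which is what makes the bidirectional case easy to parallelize and a natural fit for a single fused Triton kernel.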
Alternatives and similar repositories for flash-bidirectional-linear-attention
Users interested in flash-bidirectional-linear-attention are comparing it to the libraries listed below.
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆137 · Updated last month
- flex-block-attn: an efficient block sparse attention computation library ☆107 · Updated last month
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated 4 months ago
- Fast and memory-efficient exact kmeans ☆136 · Updated 2 months ago
- Flash-Linear-Attention models beyond language ☆21 · Updated 5 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆106 · Updated 8 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆24 · Updated 11 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆129 · Updated 8 months ago
- Official Repo for Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics ☆70 · Updated 3 weeks ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆229 · Updated 7 months ago
- ☆104 · Updated 11 months ago
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT Model Pre-training ☆35 · Updated 7 months ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆53 · Updated 3 weeks ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- ☆270 · Updated 7 months ago
- ☆32 · Updated last year
- ☆47 · Updated last month
- ☆64 · Updated 6 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- Official repository for ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆22 · Updated 3 months ago
- ☆107 · Updated last year
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆67 · Updated last year
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP ☆93 · Updated 5 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆105 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated last year
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated 4 months ago