fla-org / flash-bidirectional-linear-attention
Triton implementation of bi-directional (non-causal) linear attention
☆44 · Updated last month
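For context: bidirectional linear attention drops both the causal mask and the softmax, replacing the latter with a positive kernel feature map so the sequence dimension can be contracted once, giving O(N) cost. Below is a minimal PyTorch sketch of that standard formulation (using the elu(x) + 1 feature map from Katharopoulos et al., 2020). The function name and shapes are illustrative, not this repository's API; the repo's fused Triton kernel computes the same quantity far more efficiently.

```python
import torch
import torch.nn.functional as F

def linear_attention_noncausal(q, k, v, eps=1e-6):
    """Bidirectional (non-causal) linear attention, O(N) in sequence length.

    q, k, v: (batch, heads, seq_len, head_dim) tensors.
    """
    # Positive feature map phi(x) = elu(x) + 1 (Katharopoulos et al., 2020).
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    # With no causal mask, the K^T V summary is a single global reduction
    # over the sequence dimension instead of a prefix scan.
    kv = torch.einsum('bhnd,bhne->bhde', k, v)          # (B, H, d, d_v)
    z = torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2))  # per-query normalizer
    return torch.einsum('bhnd,bhde->bhne', q, kv) / (z.unsqueeze(-1) + eps)

# Smoke test with toy shapes:
q = k = v = torch.randn(2, 4, 128, 64)
out = linear_attention_noncausal(q, k, v)  # (2, 4, 128, 64)
```

The absence of a prefix scan is what makes the non-causal case simpler to fuse into a single kernel than its autoregressive counterpart.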
Alternatives and similar repositories for flash-bidirectional-linear-attention:
Users interested in flash-bidirectional-linear-attention are comparing it to the libraries listed below.
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆40 · Updated 6 months ago
- ☆30 · Updated 9 months ago
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆98 · Updated 8 months ago
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling. ☆34 · Updated 8 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆63 · Updated 11 months ago
- The official repo of continuous speculative decoding ☆25 · Updated 4 months ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆27 · Updated 3 months ago
- ✈️ Accelerating Vision Diffusion Transformers with Skip Branches. ☆61 · Updated 3 months ago
- This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality" ☆46 · Updated 2 months ago
- ☆17 · Updated 2 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆21 · Updated last month
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆59 · Updated 9 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆81 · Updated 4 months ago
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated this week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆37 · Updated 5 months ago
- PyTorch implementation of StableMask (ICML'24) ☆12 · Updated 8 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ☆40 · Updated 8 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 9 months ago
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 5 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆53 · Updated 7 months ago
- ☆101 · Updated last year
- [CVPR 2025] TinyFusion: Diffusion Transformers Learned Shallow ☆86 · Updated 3 months ago
- Here we will test various linear attention designs. ☆60 · Updated 10 months ago