Triton implementation of bi-directional (non-causal) linear attention
☆75 · Mar 1, 2026 · Updated 2 months ago
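For context on what the headline repository computes: bidirectional (non-causal) linear attention replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), normalized by φ(Q)(φ(K)ᵀ1), which drops the O(n²) attention matrix in favor of O(n·d²) work. Below is a minimal NumPy sketch of that computation; the ELU+1 feature map (from Katharopoulos et al., 2020) is an assumption for illustration and may differ from the feature map used in the repo's Triton kernels.

```python
import numpy as np

def bidirectional_linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: phi(Q) (phi(K)^T V), row-normalized.

    q, k, v: arrays of shape (n, d). Returns an (n, d) array.
    Cost is O(n * d^2) rather than the O(n^2 * d) of softmax attention.
    """
    def phi(x):
        # ELU(x) + 1 keeps features strictly positive (assumed feature map).
        return np.where(x > 0, x + 1.0, np.exp(x))

    qf, kf = phi(q), phi(k)
    kv = kf.T @ v            # (d, d): one global key-value summary (no causal mask)
    z = kf.sum(axis=0)       # (d,): normalizer accumulated over all positions
    return (qf @ kv) / (qf @ z + eps)[:, None]
```

Because the output rows are convex combinations of the rows of `v` (positive weights summing to ~1), feeding a constant `v` returns that constant, which is a quick sanity check.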
Alternatives and similar repositories for flash-bidirectional-linear-attention
Users that are interested in flash-bidirectional-linear-attention are comparing it to the libraries listed below.
- Flash-Linear-Attention models beyond language ☆21 · Aug 28, 2025 · Updated 8 months ago
- ☆17 · Dec 19, 2024 · Updated last year
- ☆70 · Jul 8, 2025 · Updated 9 months ago
- Reproducing Spiking Transformers using BrainCog ☆14 · Apr 10, 2025 · Updated last year
- [ICLR'25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… ☆17 · Mar 21, 2025 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Mar 22, 2026 · Updated last month
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆16 · Sep 18, 2025 · Updated 7 months ago
- ☆19 · May 2, 2024 · Updated 2 years ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆111 · Mar 12, 2026 · Updated last month
- Official implementation of Log-linear Sparse Attention (LLSA). ☆70 · Feb 2, 2026 · Updated 3 months ago
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆57 · Dec 4, 2024 · Updated last year
- Official PyTorch and Diffusers Implementation of "LinFusion: 1 GPU, 1 Minute, 16K Image" ☆315 · Dec 23, 2024 · Updated last year
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · Updated this week
- 🎉 My Collections of CUDA Kernels~ ☆11 · Jun 25, 2024 · Updated last year
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion in our EMNLP 2023 paper - Accelerating Toeplitz… ☆14 · Oct 17, 2023 · Updated 2 years ago
- Awesome Triton Resources ☆40 · Apr 27, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- ☆114 · Feb 25, 2025 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆88 · Jun 4, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆25 · Jun 6, 2024 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆252 · Jun 6, 2025 · Updated 10 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆133 · Jun 24, 2025 · Updated 10 months ago
- Code for CVPR 2024 Oral "Neural Lineage" ☆17 · Jun 18, 2024 · Updated last year
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated 2 years ago
- Official repository of Flash Local Linear Attention ☆23 · Apr 23, 2026 · Updated last week
- SGLang Kernel Wheel Index ☆22 · Updated this week
- ☆17 · Jun 11, 2025 · Updated 10 months ago
- Mamba SSM architecture that supports training on variable-length sequences ☆12 · Sep 1, 2025 · Updated 8 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Nov 18, 2024 · Updated last year
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆558 · Mar 13, 2026 · Updated last month
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆993 · Feb 5, 2026 · Updated 3 months ago
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (Published in TMLR) ☆23 · Oct 15, 2024 · Updated last year
- ☆79 · Feb 4, 2025 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆64 · Jul 30, 2023 · Updated 2 years ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆21 · Mar 7, 2024 · Updated 2 years ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆171 · Jan 30, 2025 · Updated last year
- PyTorch implementation of paper "StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization" in ICML 2024. ☆16 · Jun 4, 2024 · Updated last year
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Nov 15, 2024 · Updated last year