HazyResearch / flash-fft-conv
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
☆341 (updated Dec 28, 2024)
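For orientation: FlashFFTConv speeds up the long FFT convolutions used in sequence models such as Hyena and H3 by computing the FFT as tensor-core matrix multiplies and fusing the whole pipeline. Below is a minimal PyTorch sketch of the baseline O(N log N) operation it accelerates; the function name and shapes are illustrative, not the library's API.

```python
import torch

def fft_conv_ref(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Baseline FFT long convolution (illustrative, not FlashFFTConv's API).

    u: (batch, d, seqlen) input sequences
    k: (d, seqlen) long convolution filters, one per channel
    Zero-padding the FFT to 2 * seqlen turns circular convolution
    into the causal linear convolution that long-conv models use.
    """
    seqlen = u.shape[-1]
    fft_size = 2 * seqlen
    u_f = torch.fft.rfft(u.float(), n=fft_size)
    k_f = torch.fft.rfft(k.float(), n=fft_size)  # broadcasts over batch
    y = torch.fft.irfft(u_f * k_f, n=fft_size)[..., :seqlen]
    return y.type_as(u)

# Illustrative shapes: batch 2, model width 64, sequence length 1024.
y = fft_conv_ref(torch.randn(2, 64, 1024), torch.randn(64, 1024))
```

FlashFFTConv's contribution is computing this without materializing the intermediate FFTs in global memory; the sketch above is only the mathematical reference.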
Alternatives and similar repositories for flash-fft-conv
Users interested in flash-fft-conv are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 Support (☆27, updated Oct 5, 2024)
- ☆45 (updated Apr 30, 2018)
- Accelerated First Order Parallel Associative Scan (☆196, updated Jan 7, 2026; a sketch of the scan primitive appears after this list)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆248, updated Jun 6, 2025)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆562, updated Dec 28, 2024)
- An annotated implementation of the Hyena Hierarchy paper (☆34, updated May 28, 2023)
- Understand and test language model architectures on synthetic tasks (☆252, updated Jan 12, 2026)
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (☆169, updated Jan 30, 2025)
- Tile primitives for speedy kernels (☆3,139, updated this week)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton (☆595, updated Aug 12, 2025)
- Butterfly matrix multiplication in PyTorch (☆178, updated Oct 5, 2023)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- Convolutions for Sequence Modeling (☆911, updated Jun 13, 2024)
- ☆261 (updated Jul 11, 2024)
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling (☆40, updated Dec 2, 2023)
- ☆163 (updated Jan 24, 2023)
- Language Modeling with the H3 State Space Model (☆522, updated Sep 29, 2023)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,379, updated this week)
- ☆51 (updated Jan 28, 2024)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (☆75, updated Aug 2, 2024)
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- FlashRNN - Fast RNN Kernels with I/O Awareness (☆174, updated Oct 20, 2025)
- CUDA and Triton implementations of Flash Attention with SoftmaxN (☆73, updated May 26, 2024)
- Some preliminary explorations of Mamba's context scaling (☆218, updated Feb 8, 2024)
- Helpful tools and examples for working with flex-attention (☆1,127, updated this week)
- ☆18 (updated Mar 18, 2024)
- Implementation of GateLoop Transformer in PyTorch and JAX (☆92, updated Jun 18, 2024)
- Official PyTorch Implementation of the Longhorn Deep State Space Model (☆56, updated Dec 4, 2024)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- Viterbi decoding in PyTorch (☆40, updated Sep 10, 2025)
- ☆35 (updated Apr 12, 2024)
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (☆549, updated May 16, 2025)
- Triton-based implementation of Sparse Mixture of Experts (☆265, updated Oct 3, 2025)
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … (☆116, updated Mar 16, 2024)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆284, updated Oct 19, 2025)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆114, updated Sep 10, 2024)
- Triton implementation of bi-directional (non-causal) linear attention (☆65, updated Feb 2, 2026)
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- A MAD laboratory to improve AI architecture designs 🧪 (☆138, updated Dec 17, 2024)
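Several entries above (the accelerated-scan repo, Parallel Associative Scan for Language Models, and the linear-attention and state-space projects more broadly) build on the same primitive: the first-order recurrence h[t] = a[t]·h[t-1] + b[t], evaluated in O(log T) parallel steps because affine maps compose associatively. A minimal Hillis-Steele-style PyTorch sketch of the idea, assuming elementwise gates; it is not any listed repo's kernel:

```python
import torch

def scan_linear_recurrence(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Evaluate h[t] = a[t] * h[t-1] + b[t], with h[-1] = 0, for all t.

    Treats each step as the affine map h -> a[t] * h + b[t]; composing
    a left map with a right map gives (a_r * a_l, a_r * b_l + b_r),
    which is associative, so a scan needs only O(log T) parallel steps.
    a, b: tensors of shape (..., T). Returns h with the same shape.
    """
    a, b = a.clone(), b.clone()
    T = a.shape[-1]
    step = 1
    while step < T:
        # Combine each position t >= step with position t - step.
        # Each RHS is materialized before assignment, so every read
        # sees the previous iteration's values; b is updated before a
        # so that b's update uses the old a.
        b[..., step:] = b[..., step:] + a[..., step:] * b[..., :-step]
        a[..., step:] = a[..., step:] * a[..., :-step]
        step *= 2
    return b

# Sanity check against the sequential recurrence.
a, b = torch.rand(4, 128), torch.randn(4, 128)
h, prev = torch.empty_like(b), torch.zeros(4)
for t in range(128):
    prev = a[:, t] * prev + b[:, t]
    h[:, t] = prev
assert torch.allclose(scan_linear_recurrence(a, b), h, atol=1e-5)
```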