FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
☆350 · Dec 28, 2024 · Updated last year
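For context on what FlashFFTConv and several repos below accelerate: a long convolution can be computed in O(L log L) time via the FFT convolution theorem. Below is a minimal plain-PyTorch reference sketch of that baseline (the function name `fft_conv` and the batch × channels × length shapes are illustrative assumptions, not the library's API):

```python
# Reference FFT convolution in plain PyTorch: the O(L log L) baseline
# that fused kernels like FlashFFTConv aim to speed up on GPU.
import torch

def fft_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Causal long convolution of input u with kernel k via FFT.

    u: (batch, channels, L) input sequences
    k: (channels, L) one filter per channel
    """
    L = u.shape[-1]
    # Zero-pad to 2L so the circular convolution computed by the FFT
    # agrees with linear convolution on the first L outputs.
    n = 2 * L
    u_f = torch.fft.rfft(u, n=n)          # (batch, channels, n//2 + 1)
    k_f = torch.fft.rfft(k, n=n)          # (channels, n//2 + 1)
    y = torch.fft.irfft(u_f * k_f, n=n)   # pointwise product = convolution
    return y[..., :L]                     # keep the causal part

# Usage: a length-4096 convolution over 8 channels.
u = torch.randn(2, 8, 4096)
k = torch.randn(8, 4096)
y = fft_conv(u, k)  # (2, 8, 4096)
```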
Alternatives and similar repositories for flash-fft-conv
Users interested in flash-fft-conv are comparing it to the repositories listed below.
- FlexAttention w/ FlashAttention3 Support · ☆27 · Oct 5, 2024 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆561 · Dec 28, 2024 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆251 · Jun 6, 2025 · Updated 10 months ago
- Butterfly matrix multiplication in PyTorch · ☆179 · Oct 5, 2023 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. · ☆265 · Mar 22, 2026 · Updated 3 weeks ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" · ☆170 · Jan 30, 2025 · Updated last year
- Accelerated First Order Parallel Associative Scan · ☆197 · Jan 7, 2026 · Updated 3 months ago
- Tile primitives for speedy kernels · ☆3,312 · Apr 8, 2026 · Updated last week
- ☆45 · Apr 30, 2018 · Updated 7 years ago
- An annotated implementation of the Hyena Hierarchy paper · ☆34 · May 28, 2023 · Updated 2 years ago
- ☆261 · Jul 11, 2024 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆599 · Aug 12, 2025 · Updated 8 months ago
- Convolutions for Sequence Modeling · ☆911 · Jun 13, 2024 · Updated last year
- ☆35 · Apr 12, 2024 · Updated 2 years ago
- ☆165 · Jan 24, 2023 · Updated 3 years ago
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆113 · Sep 10, 2024 · Updated last year
- ☆18 · Mar 18, 2024 · Updated 2 years ago
- Language Modeling with the H3 State Space Model · ☆522 · Sep 29, 2023 · Updated 2 years ago
- A Quirky Assortment of CuTe Kernels · ☆924 · Updated this week
- 🚀 Efficient implementations for emerging model architectures · ☆4,878 · Updated this week
- Some preliminary explorations of Mamba's context scaling. · ☆219 · Feb 8, 2024 · Updated 2 years ago
- FlashRNN - Fast RNN Kernels with I/O Awareness · ☆179 · Oct 20, 2025 · Updated 5 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling · ☆40 · Dec 2, 2023 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … · ☆118 · Mar 16, 2024 · Updated 2 years ago
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf · ☆21 · Jul 29, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. · ☆274 · Oct 3, 2025 · Updated 6 months ago
- Implementation of https://srush.github.io/annotated-s4 · ☆515 · Jun 20, 2025 · Updated 9 months ago
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" · ☆22 · Oct 14, 2025 · Updated 6 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ☆548 · May 16, 2025 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆57 · Aug 20, 2024 · Updated last year
- Helpful tools and examples for working with flex-attention · ☆1,174 · Updated this week
- Awesome Triton Resources · ☆39 · Apr 27, 2025 · Updated 11 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. · ☆75 · Aug 2, 2024 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. · ☆289 · Jun 5, 2024 · Updated last year
- Building blocks for foundation models. · ☆614 · Jan 3, 2024 · Updated 2 years ago
- Triton implementation of the HyperAttention algorithm · ☆48 · Dec 11, 2023 · Updated 2 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆304 · Mar 10, 2026 · Updated last month
- Implementation of GateLoop Transformer in PyTorch and JAX · ☆92 · Jun 18, 2024 · Updated last year