FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
☆352 · Updated Dec 28, 2024
Alternatives and similar repositories for flash-fft-conv
Users interested in flash-fft-conv are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 Support (☆27, updated Oct 5, 2024)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆562, updated Dec 28, 2024)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆252, updated Jun 6, 2025)
- Butterfly matrix multiplication in PyTorch (☆179, updated Oct 5, 2023)
- Understand and test language model architectures on synthetic tasks. (☆265, updated Mar 22, 2026)
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (☆171, updated Jan 30, 2025)
- Accelerated First Order Parallel Associative Scan (☆197, updated Jan 7, 2026)
- Tile primitives for speedy kernels (☆3,336, updated Apr 29, 2026)
- ☆45 (updated Apr 30, 2018)
- An annotated implementation of the Hyena Hierarchy paper (☆34, updated May 28, 2023)
- ☆265 (updated Jul 11, 2024)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (☆600, updated Aug 12, 2025)
- Convolutions for Sequence Modeling (☆912, updated Jun 13, 2024)
- ☆35 (updated Apr 12, 2024)
- ☆165 (updated Jan 24, 2023)
- ☆52 (updated Jan 28, 2024)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆113, updated Sep 10, 2024)
- ☆18 (updated Mar 18, 2024)
- Language Modeling with the H3 State Space Model (☆523, updated Sep 29, 2023)
- A Quirky Assortment of CuTe Kernels (☆955, updated this week)
- 🚀 Efficient implementations for emerging model architectures (☆5,032, updated this week)
- Some preliminary explorations of Mamba's context scaling. (☆219, updated Feb 8, 2024)
- FlashRNN: Fast RNN Kernels with I/O Awareness (☆181, updated Oct 20, 2025)
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling (☆40, updated Dec 2, 2023)
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … (☆118, updated Mar 16, 2024)
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- Triton-based implementation of Sparse Mixture of Experts. (☆273, updated Oct 3, 2025)
- Implementation of https://srush.github.io/annotated-s4 (☆515, updated Jun 20, 2025)
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" (☆22, updated Oct 14, 2025)
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (☆548, updated May 16, 2025)
- HGRN2: Gated Linear RNNs with State Expansion (☆57, updated Aug 20, 2024)
- Helpful tools and examples for working with flex-attention (☆1,182, updated Apr 13, 2026)
- Awesome Triton Resources (☆41, updated Apr 27, 2025)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. (☆75, updated Aug 2, 2024)
- A collection of memory-efficient attention operators implemented in the Triton language. (☆290, updated Jun 5, 2024)
- Building blocks for foundation models. (☆617, updated Jan 3, 2024)
- Triton implementation of the HyperAttention algorithm (☆48, updated Dec 11, 2023)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆311, updated Mar 10, 2026)
- Triton implementation of bi-directional (non-causal) linear attention (☆75, updated Mar 1, 2026)