HazyResearch / flash-fft-conv
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
☆311 · Updated 3 months ago
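FlashFFTConv accelerates the FFT-based long convolutions used in sequence models by fusing the FFT, pointwise multiply, and inverse FFT into a single Tensor Core kernel. As background for the operation it computes, here is a minimal NumPy sketch of FFT convolution — a plain reference implementation, not the repository's fused kernel (the function name `fft_conv` is ours, for illustration):

```python
import numpy as np

def fft_conv(u, k):
    """Convolve signal u with kernel k via the FFT.

    Zero-pads to 2N so the circular convolution implied by the FFT
    matches linear convolution on the first N outputs. This is the
    O(N log N) operation FlashFFTConv implements efficiently on GPU.
    """
    n = u.shape[-1]
    fft_size = 2 * n
    u_f = np.fft.rfft(u, n=fft_size)   # frequency-domain signal
    k_f = np.fft.rfft(k, n=fft_size)   # frequency-domain kernel
    # Multiply in frequency space, invert, keep the first n outputs.
    return np.fft.irfft(u_f * k_f, n=fft_size)[..., :n]

# Sanity check against direct convolution.
rng = np.random.default_rng(0)
u = rng.standard_normal(1024)
k = rng.standard_normal(1024)
ref = np.convolve(u, k)[:1024]
assert np.allclose(fft_conv(u, k), ref)
```

For sequence lengths in the thousands and beyond, the FFT route avoids the O(N²) cost of direct convolution, which is why it underpins long-convolution architectures like those this repository targets.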
Alternatives and similar repositories for flash-fft-conv:
Users interested in flash-fft-conv are comparing it to the libraries listed below.
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated 8 months ago
- Accelerated First Order Parallel Associative Scan ☆180 · Updated 7 months ago
- Helpful tools and examples for working with flex-attention ☆720 · Updated this week
- When it comes to optimizers, it's always better to be safe than sorry ☆217 · Updated 2 weeks ago
- Annotated version of the Mamba paper ☆481 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆168 · Updated 10 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆510 · Updated 5 months ago
- Collection of kernels written in the Triton language ☆118 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts ☆211 · Updated 4 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆529 · Updated last month
- ☆143 · Updated last year
- Some preliminary explorations of Mamba's context scaling ☆212 · Updated last year
- A repository for log-time feedforward networks ☆221 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA AI ☆279 · Updated 3 weeks ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆229 · Updated 2 months ago
- The AdEMAMix Optimizer: Better, Faster, Older ☆180 · Updated 7 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆82 · Updated 3 weeks ago
- ☆295 · Updated this week
- ☆262 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆230 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆762 · Updated 7 months ago
- A library for unit scaling in PyTorch ☆125 · Updated 4 months ago
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆560 · Updated 3 weeks ago
- Applied AI experiments and examples for PyTorch ☆258 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆240 · Updated this week
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆375 · Updated last year
- Fast low-bit matmul kernels in Triton ☆285 · Updated this week
- Cataloging released Triton kernels ☆216 · Updated 3 months ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) ☆357 · Updated last week
- Understand and test language model architectures on synthetic tasks ☆191 · Updated last month