HazyResearch / flash-fft-conv
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
☆281
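FlashFFTConv accelerates the FFT convolution that underlies long-convolution sequence models. For orientation, here is a minimal PyTorch sketch of the underlying operation — plain FFT convolution, with illustrative shapes and a hypothetical function name, not this library's fused kernels or API:

```python
import torch

def fft_conv(u, k):
    """Long convolution via the FFT: O(n log n) instead of O(n^2).
    u: input of shape (batch, d, seqlen); k: kernel of shape (d, seqlen).
    (Function name and shapes are illustrative, not FlashFFTConv's API.)"""
    seqlen = u.shape[-1]
    fft_size = 2 * seqlen  # zero-pad so circular convolution becomes linear
    u_f = torch.fft.rfft(u, n=fft_size)
    k_f = torch.fft.rfft(k, n=fft_size)
    y = torch.fft.irfft(u_f * k_f, n=fft_size)
    return y[..., :seqlen]  # keep the causal portion

```

FlashFFTConv's contribution is fusing this pipeline and computing the FFTs as tensor-core matrix multiplies via a Monarch decomposition.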
Related projects
Alternatives and complementary repositories for flash-fft-conv
- This repository contains the experimental PyTorch native float8 training UX ☆211
- Helpful tools and examples for working with flex-attention ☆469
- Accelerated First Order Parallel Associative Scan ☆163 (a minimal scan sketch appears after this list)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆476
- Annotated version of the Mamba paper ☆457
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483
- A repository for log-time feedforward networks ☆216
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆111 (a reference transform sketch appears after this list)
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆293
- Understand and test language model architectures on synthetic tasks. ☆162
- A library for unit scaling in PyTorch ☆105
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆119
- Triton-based implementation of Sparse Mixture of Experts. ☆185
- Applied AI experiments and examples for PyTorch ☆166
- Cataloging released Triton kernels. ☆134
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆187
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆328
- Simple and fast low-bit matmul kernels in CUDA / Triton ☆143
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆360 (a chunked-attention sketch appears after this list)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆537
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆207
- Collection of kernels written in Triton language ☆68
- 94% on CIFAR-10 in 2.6 seconds 💨 96% in 27 seconds ☆177
- Some preliminary explorations of Mamba's context scaling. ☆191
- The AdEMAMix Optimizer: Better, Faster, Older. ☆172
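The associative-scan entry above refers to evaluating first-order recurrences h_t = a_t·h_{t-1} + b_t in parallel: the pairwise combine (a₁, b₁) ∘ (a₂, b₂) = (a₁·a₂, a₂·b₁ + b₂) is associative, so the recurrence admits a log-depth scan. A minimal pure-PyTorch reference of the idea — not that repo's accelerated implementation, and the function name is mine:

```python
import torch
import torch.nn.functional as F

def first_order_scan(a, b):
    """Compute h_t = a_t * h_{t-1} + b_t (with h_{-1} = 0) along the last dim
    in O(log n) parallel steps (Hillis-Steele). Equivalent to the sequential
    loop: h = 0; for t in range(n): h = a[..., t] * h + b[..., t]."""
    n = a.shape[-1]
    shift = 1
    while shift < n:
        # Prefix states shifted right by `shift`, padded with the identity (1, 0).
        a_prev = F.pad(a[..., :-shift], (shift, 0), value=1.0)
        b_prev = F.pad(b[..., :-shift], (shift, 0), value=0.0)
        # Associative combine: (a_prev, b_prev) ∘ (a, b) = (a_prev*a, a*b_prev + b)
        a, b = a_prev * a, a * b_prev + b
        shift *= 2
    return b
```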
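The fast Hadamard transform entry above fuses the standard butterfly recursion into a single CUDA kernel: a length-n Walsh-Hadamard transform factors into log₂(n) add/subtract stages. A pure-PyTorch reference of the unnormalized transform, as a sketch rather than that repo's interface:

```python
import torch

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform along the last dimension,
    whose length must be a power of two. O(n log n) via log2(n) butterfly stages."""
    orig_shape = x.shape
    n = orig_shape[-1]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        x = x.reshape(-1, n // (2 * h), 2, h)
        a, b = x[:, :, 0, :], x[:, :, 1, :]
        x = torch.stack((a + b, a - b), dim=2)  # one butterfly stage
        h *= 2
    return x.reshape(orig_shape)
```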
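The "Self-attention Does Not Need O(n²) Memory" entry above rests on the observation that softmax attention can be accumulated over key/value chunks with a running maximum and normalizer, so the full n×n score matrix is never materialized. A single-head sketch under assumed (n, d) shapes — illustrative, not that repo's API:

```python
import torch

def chunked_attention(q, k, v, chunk=1024):
    """Single-head attention with O(n) activation memory: iterate over key/value
    chunks, keeping a running max `m` and normalizer `denom` (online softmax).
    q, k, v: tensors of shape (n, d)."""
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    m = torch.full((q.shape[0], 1), float("-inf"), dtype=q.dtype, device=q.device)
    denom = torch.zeros(q.shape[0], 1, dtype=q.dtype, device=q.device)
    for i in range(0, k.shape[0], chunk):
        s = (q @ k[i:i + chunk].T) * scale                # scores vs. this chunk
        m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)                 # rescale old accumulators
        denom = denom * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ v[i:i + chunk]
        m = m_new
    return out / denom
```

The same rescaling trick is what FlashAttention implements inside a fused kernel; here it is spelled out at the PyTorch level for clarity.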