hazan-lab / flash-stu
PyTorch implementation of the Flash Spectral Transform Unit.
☆20 · Updated last year
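For context on what the repo implements: the Spectral Transform Unit (from the Hazan lab's spectral state-space-model line of work) convolves inputs with fixed filters taken from the top eigenvectors of a Hankel matrix, so only the input/output projections are learned. Below is a minimal sketch of the filter construction and the FFT convolution, assuming the Hankel entries Z[i, j] = 2 / ((i + j)^3 − (i + j)) and the eigenvalue^(1/4) scaling described in the spectral SSM papers; the function names are illustrative, not flash-stu's actual API.

```python
import torch

def spectral_filters(seq_len: int, k: int) -> torch.Tensor:
    """Top-k eigenvectors of the Hankel matrix, scaled by eigval ** 0.25.

    Sketch of the fixed STU convolution filters; not flash-stu's real API.
    """
    i = torch.arange(1, seq_len + 1, dtype=torch.float64)
    s = i[:, None] + i[None, :]              # s = i + j (1-indexed)
    z = 2.0 / (s ** 3 - s)                   # Hankel matrix from the spectral SSM papers
    eigvals, eigvecs = torch.linalg.eigh(z)  # eigenvalues in ascending order
    return (eigvecs[:, -k:] * eigvals[-k:] ** 0.25).float()  # (seq_len, k)

def stu_convolve(x: torch.Tensor, filters: torch.Tensor) -> torch.Tensor:
    """Causal convolution of x (batch, seq, dim) with each filter via FFT.

    Returns (batch, seq, k, dim); a learned projection would mix this back down.
    """
    seq_len = x.shape[1]
    n = 2 * seq_len                                # zero-pad to avoid circular wrap
    xf = torch.fft.rfft(x, n=n, dim=1)             # (b, f, d)
    ff = torch.fft.rfft(filters, n=n, dim=0)       # (f, k)
    yf = xf[:, :, None, :] * ff[None, :, :, None]  # (b, f, k, d)
    return torch.fft.irfft(yf, n=n, dim=1)[:, :seq_len]
```

The filters depend only on the sequence length, so they can be computed once at initialization and shared across layers; the learned parameters sit entirely in the projections applied before and after the convolution.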
Alternatives and similar repositories for flash-stu
Users interested in flash-stu are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Transformers components but in Triton ☆34 · Updated 6 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆27 · Updated this week
- Awesome Triton Resources ☆38 · Updated 7 months ago
- Parallel Associative Scan for Language Models (see the scan sketch after this list) ☆18 · Updated last year
- Fast and memory-efficient exact attention ☆74 · Updated 9 months ago
- ☆23 · Updated 7 months ago
- ☆41 · Updated last month
- Here we will test various linear attention designs (see the linear-attention sketch after this list). ☆62 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 3 months ago
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated last year
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- Flash-Linear-Attention models beyond language ☆20 · Updated 3 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆77 · Updated last week
- Experiment using Tangent to autodiff Triton ☆80 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆65 · Updated this week
- ☆32 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- ☆33 · Updated last year
- Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) ☆36 · Updated 3 weeks ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated last year
- RADLADS training code ☆34 · Updated 7 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- ☆22 · Updated last year
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆17 · Updated 8 months ago
- Quantized Attention on GPU ☆44 · Updated last year
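Several entries above (the parallel associative scan and PScan repos) revolve around one primitive: the linear recurrence h_t = a_t · h_{t−1} + x_t is associative under the combine (a1, x1) ∘ (a2, x2) = (a1·a2, a2·x1 + x2), so all L outputs can be computed in O(log L) parallel steps instead of a sequential loop. A minimal Hillis-Steele-style sketch in PyTorch, assuming diagonal (elementwise) transition coefficients; this is illustrative, not any listed repo's kernel:

```python
import torch

def parallel_scan(a: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Evaluate h_t = a_t * h_{t-1} + x_t (with h_{-1} = 0) for all t.

    a, x: (batch, seq, dim). Recursive doubling with the associative combine
    (a1, x1) o (a2, x2) = (a1 * a2, a2 * x1 + x2); O(log L) steps.
    """
    a, x = a.clone(), x.clone()  # don't mutate the caller's tensors
    seq_len = a.shape[1]
    step = 1
    while step < seq_len:
        # combine each position t >= step with its summary at t - step;
        # the RHS is evaluated fully before the slice is written, so the
        # overlapping reads see the previous step's values
        x[:, step:] = x[:, step:] + a[:, step:] * x[:, :-step]
        a[:, step:] = a[:, step:] * a[:, :-step]
        step *= 2
    return x
```

A sequential reference for testing: start with h = torch.zeros_like(x[:, 0]) and iterate h = a[:, t] * h + x[:, t]; the scan should match it to floating-point tolerance.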
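Likewise, the linear-attention entries (Flash-Linear-Attention, LASP, the mLSTM kernels) build on the same reformulation: with a kernel feature map φ, causal attention becomes a running sum S_t = Σ_{s≤t} φ(k_s) v_sᵀ queried by φ(q_t), giving O(L·d²) time instead of O(L²·d). A naive reference in the style of Katharopoulos et al. (2020); real kernels compute the prefix sums chunkwise rather than materializing them as below:

```python
import torch

def linear_attention(q, k, v, eps: float = 1e-6):
    """Causal linear attention with the elu(x) + 1 feature map.

    q, k, v: (batch, seq, dim). Naive reference: the cumsum materializes a
    (batch, seq, dim, dim) tensor, which fused kernels avoid.
    """
    phi_q = torch.nn.functional.elu(q) + 1
    phi_k = torch.nn.functional.elu(k) + 1
    # running state S_t = sum_{s<=t} phi(k_s) v_s^T and normalizer z_t
    s = torch.cumsum(torch.einsum('bld,ble->blde', phi_k, v), dim=1)
    z = torch.cumsum(phi_k, dim=1)
    num = torch.einsum('bld,blde->ble', phi_q, s)
    den = torch.einsum('bld,bld->bl', phi_q, z).unsqueeze(-1)
    return num / (den + eps)
```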