lxxue / prefix_sum
A PyTorch wrapper of parallel exclusive scan in CUDA
☆9 · Updated last year
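For reference, an exclusive scan returns at each position the sum of all preceding elements, with 0 at position 0. Below is a minimal PyTorch sketch of that semantics; the function name `exclusive_scan` is illustrative only and is not necessarily this repo's actual API, which wraps a CUDA kernel.

```python
import torch

def exclusive_scan(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Exclusive prefix sum: out[i] = x[0] + ... + x[i-1], with out[0] = 0."""
    # Inclusive cumulative sum minus the current element, i.e. the
    # inclusive scan shifted right by one position.
    return torch.cumsum(x, dim=dim) - x

# Example usage (a CUDA scan kernel such as the one wrapped by this repo
# would replace the torch.cumsum-based reference above):
x = torch.tensor([3.0, 1.0, 7.0, 0.0, 4.0])
print(exclusive_scan(x))  # tensor([ 0.,  3.,  4., 11., 11.])
```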
Alternatives and similar repositories for prefix_sum:
Users interested in prefix_sum are comparing it to the libraries listed below.
- Parallel Associative Scan for Language Models · ☆18 · Updated last year
- Efficient PScan implementation in PyTorch · ☆15 · Updated last year
- ☆52 · Updated 4 months ago
- Blog post · ☆16 · Updated last year
- ☆37 · Updated last year
- ☆31 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan · ☆171 · Updated 6 months ago
- Experiment of using Tangent to autodiff triton · ☆75 · Updated last year
- ☆33 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆44 · Updated last year
- Parallelizing non-linear sequential models over the sequence length · ☆50 · Updated last month
- ☆29 · Updated 2 years ago
- ☆47 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. · ☆22 · Updated 3 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax · ☆87 · Updated 8 months ago
- Implementations of various linear RNN layers using pytorch and triton · ☆49 · Updated last year
- ☆30 · Updated 2 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) · ☆122 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆60 · Updated 4 months ago
- ☆20 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated 8 months ago
- Here we will test various linear attention designs. · ☆58 · Updated 9 months ago
- The simplest but fast implementation of matrix multiplication in CUDA. · ☆34 · Updated 6 months ago
- ☆42 · Updated 6 years ago
- ☆51 · Updated 9 months ago
- ☆29 · Updated 4 months ago
- JAX bindings for Flash Attention v2 · ☆85 · Updated 7 months ago
- [ICML 2024] SIRFShampoo: Structured inverse- and root-free Shampoo in PyTorch (https://arxiv.org/abs/2402.03496) · ☆14 · Updated 3 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness · ☆75 · Updated 2 months ago
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective · ☆49 · Updated 9 months ago