lxxue / prefix_sum
A PyTorch wrapper of parallel exclusive scan in CUDA
☆12 · Updated 2 years ago
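For readers new to the primitive: an exclusive scan (exclusive prefix sum) outputs at each position the sum of all preceding elements, with 0 at the first position. The sketch below is a plain-PyTorch reference for those semantics only; it does not use this repository's API, and the function name is made up for illustration. A parallel CUDA kernel, like the one this repository wraps, computes the same result without the sequential dependency.

```python
import torch

def exclusive_scan_reference(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Exclusive prefix sum: out[i] = x[0] + ... + x[i-1], with out[0] = 0.

    Plain-PyTorch illustration of the semantics (not this repository's API).
    """
    inclusive = torch.cumsum(x, dim=dim)
    # Subtracting x turns the inclusive scan into the exclusive one.
    return inclusive - x

x = torch.tensor([3.0, 1.0, 4.0, 1.0, 5.0])
print(exclusive_scan_reference(x))  # tensor([0., 3., 4., 8., 9.])
```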
Alternatives and similar repositories for prefix_sum
Users interested in prefix_sum are comparing it to the libraries listed below.
- ☆40 · Updated last year
- ☆57 · Updated 11 months ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆188 · Updated last year
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023); see the sketch after this list ☆94 · Updated 9 months ago
- ☆32 · Updated 11 months ago
- Efficient PScan implementation in PyTorch ☆16 · Updated last year
- ☆44 · Updated 7 years ago
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- ☆32 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆180 · Updated last week
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- Implementations of various linear RNN layers using pytorch and triton ☆53 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 months ago
- ☆39 · Updated 2 weeks ago
- A State-Space Model with Rational Transfer Function Representation. ☆81 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Blog post ☆17 · Updated last year
- ☆34 · Updated last year
- JAX bindings for Flash Attention v2 ☆91 · Updated last week
- ☆83 · Updated last year
- ☆49 · Updated last year
- ☆42 · Updated 5 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆69 · Updated last month
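Several of the repositories above, including the accelerated first-order parallel associative scan and Heinsen's parallelization, target the same first-order linear recurrence h_t = a_t * h_{t-1} + b_t. The sketch below is a minimal illustration of why that recurrence is scan-friendly; the function names are hypothetical and it does not reproduce any listed library's API. Production implementations use an associative-scan kernel or a log-space formulation for numerical stability rather than the naive cumprod shown here.

```python
import torch

def linear_recurrence_loop(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Sequential reference: h_t = a_t * h_{t-1} + b_t, with h_0 = 0."""
    h, out = torch.zeros(()), []
    for a_t, b_t in zip(a, b):
        h = a_t * h + b_t
        out.append(h)
    return torch.stack(out)

def linear_recurrence_scan(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Same recurrence via cumulative primitives instead of a Python loop.

    Uses h_t = (prod_{s<=t} a_s) * sum_{s<=t} b_s / (prod_{r<=s} a_r).
    Mathematically equivalent to the loop, but the naive cumprod can
    under/overflow, which is why the libraries listed above work in log
    space or with a dedicated associative-scan kernel.
    """
    a_star = torch.cumprod(a, dim=0)
    return a_star * torch.cumsum(b / a_star, dim=0)

a = torch.rand(8) + 0.5
b = torch.randn(8)
assert torch.allclose(linear_recurrence_loop(a, b), linear_recurrence_scan(a, b), atol=1e-5)
```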