lxxue / prefix_sum
A PyTorch wrapper of parallel exclusive scan in CUDA
☆12 · Updated 2 years ago
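The repository's own API is not shown on this page; as context, an exclusive scan (exclusive prefix sum) returns, at each position, the sum of all preceding elements, with the identity at position 0. Below is a minimal pure-PyTorch reference for that operation — the function name is illustrative only, not the wrapper's actual interface, and a CUDA wrapper like this one would compute the same result with a parallel scan kernel.

```python
import torch

def exclusive_scan_reference(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Pure-PyTorch reference for an exclusive prefix sum along `dim`.

    out[i] = x[0] + ... + x[i-1], with out[0] = 0.
    """
    inclusive = torch.cumsum(x, dim=dim)
    # Shift right by one; the wrapped-around last element is overwritten
    # with the identity (0) below.
    out = torch.roll(inclusive, shifts=1, dims=dim)
    out.select(dim, 0).zero_()
    return out

x = torch.tensor([3., 1., 4., 1., 5.])
print(exclusive_scan_reference(x))  # tensor([0., 3., 4., 8., 9.])
```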
Alternatives and similar repositories for prefix_sum
Users interested in prefix_sum are comparing it to the libraries listed below:
- ☆40 · Updated last year
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆193 · Updated last year
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆127 · Updated 2 years ago
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- ☆33 · Updated last year
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023); see the sketch after this list ☆98 · Updated last year
- ☆62 · Updated last year
- ☆44 · Updated 7 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆54 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- Supporting code for the blog post on modular manifolds ☆107 · Updated 2 months ago
- ☆50 · Updated last week
- ☆32 · Updated last year
- ☆51 · Updated last year
- ☆34 · Updated last year
- JAX bindings for Flash Attention v2 ☆101 · Updated this week
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆18 · Updated 9 months ago
- Blog post ☆17 · Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated last week
- A library for unit scaling in PyTorch ☆133 · Updated 5 months ago
- A State-Space Model with Rational Transfer Function Representation ☆83 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆48 · Updated 2 years ago
- ☆43 · Updated last month
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆61 · Updated 3 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- PyTorch implementation of the Flash Spectral Transform Unit ☆21 · Updated last year
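Several entries above — the PScan implementations, the accelerated associative scan, and the Heinsen (2023) code — target the same primitive: evaluating a first-order linear recurrence x_t = a_t·x_{t-1} + b_t with cumulative operations rather than a sequential loop. The sketch below is a minimal, numerically naive illustration of that idea; it assumes nonzero coefficients and works directly in linear space, whereas the listed libraries use log-space formulations and fused kernels, and the names here are not any repository's API.

```python
import torch

def linear_recurrence_parallel(a: torch.Tensor, b: torch.Tensor, x0: float = 0.0) -> torch.Tensor:
    """Compute x_t = a_t * x_{t-1} + b_t for all t using only cumulative ops.

    Naive variant: requires all a_t != 0 and is numerically fragile for long
    sequences; the repositories above address this with log-space math.
    """
    a_star = torch.cumprod(a, dim=-1)                    # prod_{s<=t} a_s
    x = a_star * (x0 + torch.cumsum(b / a_star, dim=-1))
    return x

a = torch.tensor([0.9, 0.8, 0.7])
b = torch.tensor([1.0, 2.0, 3.0])
print(linear_recurrence_parallel(a, b))  # tensor([1.0000, 2.8000, 4.9600]), same as the sequential loop
```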