Accelerated First Order Parallel Associative Scan
☆194 · Jan 7, 2026 · Updated last month
Alternatives and similar repositories for accelerated-scan
Users interested in accelerated-scan are comparing it to the libraries listed below.
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Apr 26, 2024 · Updated last year
- Parallel Associative Scan for Language Models ☆18 · Jan 8, 2024 · Updated 2 years ago
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated last year
- ☆40 · Jan 5, 2024 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Aug 20, 2024 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · May 25, 2024 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆254 · Updated this week
- ☆53 · May 20, 2024 · Updated last year
- ☆24 · Sep 25, 2024 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆67 · Apr 24, 2024 · Updated last year
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆98 · Dec 5, 2024 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Jan 28, 2026 · Updated last month
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated last year
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Apr 27, 2023 · Updated 2 years ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Aug 12, 2025 · Updated 6 months ago
- Implementations of various linear RNN layers using pytorch and triton ☆54 · Aug 4, 2023 · Updated 2 years ago
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆48 · Jun 22, 2024 · Updated last year
- Stick-breaking attention ☆62 · Jul 1, 2025 · Updated 8 months ago
- ☆45 · Apr 30, 2018 · Updated 7 years ago
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Jun 28, 2024 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆248 · Jun 6, 2025 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆138 · Dec 17, 2024 · Updated last year
- Unofficial implementation of the Linear Recurrent Unit (LRU, Orvieto et al. 2023) ☆62 · Sep 3, 2025 · Updated 5 months ago
- ☆19 · Dec 4, 2025 · Updated 2 months ago
- Experiment of using Tangent to autodiff triton ☆82 · Jan 22, 2024 · Updated 2 years ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- ☆106 · Mar 9, 2024 · Updated last year
- ☆124 · May 28, 2024 · Updated last year
- Annotated version of the Mamba paper ☆497 · Feb 27, 2024 · Updated 2 years ago
- ☆17 · Dec 19, 2024 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆343 · Dec 28, 2024 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆21 · Jul 29, 2024 · Updated last year
- train with kittens! ☆63 · Oct 25, 2024 · Updated last year
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Feb 25, 2025 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · May 26, 2024 · Updated last year
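Many of the repositories above revolve around the same primitive named in the title: the first-order recurrence h_t = a_t · h_{t-1} + b_t, which parallelizes because composing two affine maps is associative. A minimal sketch of that idea (plain Python for clarity; not the API of accelerated-scan or any library listed here, and shown as a sequential fold where a real implementation would combine elements tree-wise on an accelerator):

```python
def combine(f, g):
    # Compose two affine maps: applying f (x -> a1*x + b1) then
    # g (x -> a2*x + b2) yields x -> (a2*a1)*x + (a2*b1 + b2).
    # This composition is associative, which is what lets parallel
    # scan algorithms split the sequence into independent chunks.
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def associative_scan(pairs):
    # Inclusive scan over (a_t, b_t) coefficient pairs. The t-th
    # result (A_t, B_t) satisfies h_t = A_t * h_0 + B_t.
    out = []
    acc = (1.0, 0.0)  # identity map x -> x
    for p in pairs:
        acc = combine(acc, p)
        out.append(acc)
    return out

# Sanity check against the naive sequential recurrence.
pairs = [(0.5, 1.0), (2.0, -1.0), (1.5, 0.5)]
h0 = 3.0
h, naive = h0, []
for a, b in pairs:
    h = a * h + b
    naive.append(h)
scanned = [a * h0 + b for a, b in associative_scan(pairs)]
print(naive == scanned)  # True: scan reproduces the recurrence
```

Because `combine` is associative, the fold above can be replaced by a balanced-tree reduction (as in Blelloch-style scans), turning O(T) sequential steps into O(log T) parallel steps; that reformulation is the core trick shared by the linear-RNN and linear-attention projects listed on this page.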