buttercutter / Mamba_SSM
A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752)
☆21 · Updated last year
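For orientation, below is a minimal sketch of the selective state-space recurrence that Mamba builds on (h_t = Ā_t h_{t-1} + B̄_t x_t, y_t = C_t h_t), written as a naive sequential scan. It is not taken from this repository; tensor names, shapes, and the discretization choice are illustrative assumptions only.

```python
# Minimal, illustrative selective-scan sketch (not this repo's code).
import torch

def selective_scan(x, delta, A, B, C):
    """x, delta: (batch, seq, d_in); A: (d_in, d_state); B, C: (batch, seq, d_state)."""
    batch, seq_len, d_in = x.shape
    d_state = A.shape[1]
    # Discretize the continuous SSM: A_bar = exp(delta * A); fold delta and B into the input term.
    A_bar = torch.exp(delta.unsqueeze(-1) * A)                     # (b, s, d_in, d_state)
    Bx = delta.unsqueeze(-1) * B.unsqueeze(2) * x.unsqueeze(-1)    # (b, s, d_in, d_state)
    h = torch.zeros(batch, d_in, d_state, device=x.device)
    ys = []
    for t in range(seq_len):
        h = A_bar[:, t] * h + Bx[:, t]                             # input-dependent state update
        ys.append(torch.einsum('bds,bs->bd', h, C[:, t]))          # readout y_t = C_t h_t
    return torch.stack(ys, dim=1)                                  # (batch, seq, d_in)

if __name__ == "__main__":
    b, s, d_in, d_state = 2, 16, 8, 4
    x = torch.randn(b, s, d_in)
    delta = torch.nn.functional.softplus(torch.randn(b, s, d_in))  # positive step sizes
    A = -torch.rand(d_in, d_state)                                 # negative A keeps the scan stable
    B, C = torch.randn(b, s, d_state), torch.randn(b, s, d_state)
    print(selective_scan(x, delta, A, B, C).shape)                 # torch.Size([2, 16, 8])
```

Production implementations replace this Python loop with a fused parallel scan kernel; the loop above is only meant to show the recurrence itself.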
Alternatives and similar repositories for Mamba_SSM:
Users interested in Mamba_SSM are comparing it to the libraries listed below.
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated 8 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆46 · Updated last year
- Simple notebooks to learn diffusion models on toy datasets ☆17 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆47 · Updated last year
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆33 · Updated 3 months ago
- Hacks for PyTorch ☆18 · Updated last year
- The official GitHub page for the survey paper "A Survey of RWKV". ☆22 · Updated last month
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated last week
- Directed masked autoencoders ☆14 · Updated 2 years ago
- Here we will test various linear attention designs. ☆59 · Updated 10 months ago
- ☆24 · Updated 5 months ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- ☆30 · Updated 9 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆47 · Updated 4 months ago
- ☆29 · Updated 2 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆24 · Updated 8 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆22 · Updated 8 months ago
- ☆33 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆46 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆93 · Updated 6 months ago
- Possibly useful materials for learning the RWKV language model. ☆24 · Updated last year
- Toy genetic algorithm in PyTorch ☆33 · Updated last month
- ☆20 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆79 · Updated 9 months ago
- IntLLaMA: a fast and light quantization solution for LLaMA ☆18 · Updated last year