buttercutter / Mamba_SSM
A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752)
☆21 Updated 11 months ago
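For orientation, the recurrence this repository implements is the selective state-space (SSM) scan from the Mamba paper. Below is a minimal, illustrative PyTorch sketch of that recurrence, written purely as an assumption for readers comparing the repos listed further down; the function name `selective_scan`, the tensor layout, and the toy shapes are hypothetical and are not this repository's actual code or API.

```python
# Minimal, assumed sketch of the selective SSM scan (naive sequential version):
# input-dependent delta, B, C parameterize a discretized linear recurrence
# h_t = exp(delta_t * A) * h_{t-1} + (delta_t * B_t) * x_t,  y_t = C_t · h_t
import torch

def selective_scan(x, delta, A, B, C):
    """x: (batch, length, d_model), delta: (batch, length, d_model),
    A: (d_model, d_state), B/C: (batch, length, d_state) -> y: (batch, length, d_model)"""
    batch, length, d_model = x.shape
    d_state = A.shape[1]
    h = torch.zeros(batch, d_model, d_state, device=x.device, dtype=x.dtype)
    ys = []
    for t in range(length):
        # Zero-order-hold style discretization of the continuous parameters
        dA = torch.exp(delta[:, t].unsqueeze(-1) * A)          # (batch, d_model, d_state)
        dB = delta[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1)   # (batch, d_model, d_state)
        h = dA * h + dB * x[:, t].unsqueeze(-1)                 # state update
        ys.append((h * C[:, t].unsqueeze(1)).sum(-1))           # read-out: (batch, d_model)
    return torch.stack(ys, dim=1)

# Toy usage with random tensors (shapes are illustrative only)
x = torch.randn(2, 16, 8)
delta = torch.nn.functional.softplus(torch.randn(2, 16, 8))    # positive step sizes
A = -torch.rand(8, 4)                                          # negative A keeps the scan stable
B, C = torch.randn(2, 16, 4), torch.randn(2, 16, 4)
print(selective_scan(x, delta, A, B, C).shape)                 # torch.Size([2, 16, 8])
```

The official implementation fuses this scan into a single hardware-aware kernel; the loop above only shows the mathematical structure.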
Alternatives and similar repositories for Mamba_SSM:
Users interested in Mamba_SSM are comparing it to the libraries listed below.
- Official code for the paper "Attention as a Hypernetwork" ☆23 Updated 6 months ago
- ☆24 Updated 3 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆52 Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆46 Updated last year
- ☆29 Updated 2 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 Updated 7 months ago
- Hacks for PyTorch ☆18 Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆12 Updated 2 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆44 Updated last year
- ☆15 Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 Updated 7 months ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆21 Updated 3 weeks ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 Updated last year
- ☆15 Updated last week
- Official Code Repository for the paper "Key-value memory in the brain" ☆20 Updated last week
- ☆47 Updated 6 months ago
- Directed masked autoencoders ☆14 Updated last year
- Official code for the paper "Image generation with shortest path diffusion" accepted at ICML 2023 ☆22 Updated last year
- ☆31 Updated 7 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 Updated 7 months ago
- Here we will test various linear attention designs ☆58 Updated 8 months ago
- Simple notebooks to learn diffusion models on toy datasets ☆17 Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆74 Updated 7 months ago
- ☆32 Updated last year
- ☆36 Updated 7 months ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 Updated 2 years ago
- ☆16 Updated last year