buttercutter / Mamba_SSM
A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752)
☆22 · Updated last year
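For orientation, the core of Mamba is a selective state-space recurrence whose parameters (Δ, B, C) are input-dependent. Below is a minimal sequential reference sketch of that recurrence, assuming PyTorch and illustrative tensor shapes; it is not taken from this repository, which may instead use a parallel scan and fused kernels.

```python
import torch

def selective_scan(u, delta, A, B, C, D):
    """Sequential reference of the selective SSM recurrence used in Mamba.

    x_t = exp(delta_t * A) * x_{t-1} + (delta_t * B_t) * u_t,   y_t = C_t x_t + D * u_t

    Shapes (illustrative):
      u:     (batch, length, d_inner)   input sequence
      delta: (batch, length, d_inner)   input-dependent step sizes (positive)
      A:     (d_inner, d_state)         state matrix (negative entries in practice)
      B, C:  (batch, length, d_state)   input-dependent projections
      D:     (d_inner,)                 skip connection
    """
    b, l, d = u.shape
    n = A.shape[1]
    # Zero-order-hold style discretisation (simplified Euler form for B).
    deltaA = torch.exp(delta.unsqueeze(-1) * A)                        # (b, l, d, n)
    deltaB_u = delta.unsqueeze(-1) * B.unsqueeze(2) * u.unsqueeze(-1)  # (b, l, d, n)

    x = torch.zeros(b, d, n, device=u.device, dtype=u.dtype)
    ys = []
    for t in range(l):
        x = deltaA[:, t] * x + deltaB_u[:, t]              # recurrent state update
        ys.append(torch.einsum("bdn,bn->bd", x, C[:, t]))  # project state to output
    y = torch.stack(ys, dim=1)                             # (b, l, d)
    return y + u * D                                       # residual skip connection

# Example call with random tensors (batch=2, length=16, d_inner=8, d_state=4):
# y = selective_scan(torch.randn(2, 16, 8), torch.rand(2, 16, 8), -torch.rand(8, 4),
#                    torch.randn(2, 16, 4), torch.randn(2, 16, 4), torch.randn(8))
```

Real implementations replace the Python loop over time with a hardware-aware parallel scan, which is where Mamba's linear-time speed comes from.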
Alternatives and similar repositories for Mamba_SSM
Users who are interested in Mamba_SSM are comparing it to the libraries listed below.
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆57 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 2 months ago
- Here we will test various linear attention designs (a minimal sketch of causal linear attention appears after this list). ☆61 · Updated last year
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆22 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆45 · Updated 2 years ago
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 9 months ago
- A repository for DenseSSMs ☆89 · Updated last year
- ☆23 · Updated last year
- Official code for the paper "Image generation with shortest path diffusion" accepted at ICML 2023. ☆24 · Updated 2 years ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆47 · Updated 2 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated last year
- ☆41 · Updated 4 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- [NeurIPS 2023] The PyTorch Implementation of Scheduled (Stable) Weight Decay. ☆61 · Updated last year
- ☆105 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 4 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- ☆16 · Updated 2 years ago
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆25 · Updated 3 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆38 · Updated 8 months ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆63 · Updated 2 years ago
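Several of the repositories above (the linear-attention testbed, the Taylor-series and bi-directional linear attention explorations, LASP, Transnormer) revolve around linear attention. As a point of reference, here is a minimal sketch of causal linear attention, assuming PyTorch; the elu+1 feature map and the shapes are illustrative choices, not taken from any listed repository.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """Causal linear attention: softmax(QK^T)V is replaced by phi(Q) (phi(K)^T V),
    with the kernel sums accumulated causally so cost grows linearly in length.

    q, k, v: (batch, heads, length, dim); returns (batch, heads, length, dim).
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1                              # positive feature map phi
    kv = torch.einsum("bhld,bhle->bhlde", k, v).cumsum(dim=2)      # running sum of phi(k_t) v_t^T
    k_sum = k.cumsum(dim=2)                                        # running sum of phi(k_t)
    num = torch.einsum("bhld,bhlde->bhle", q, kv)                  # numerator per position
    den = torch.einsum("bhld,bhld->bhl", q, k_sum).clamp(min=eps)  # normaliser per position
    return num / den.unsqueeze(-1)

# Example: out = causal_linear_attention(*[torch.randn(2, 4, 128, 32) for _ in range(3)])
```

This naive version materialises a (batch, heads, length, dim, dim) tensor for the cumulative sum; practical implementations such as those listed above compute the running sums in chunks or in fused Triton/CUDA kernels.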