krafton-ai / mambaformer-icl
MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248
☆57 · Updated last year
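For context on the architecture named above: MambaFormer (arXiv:2402.04248) is a hybrid that interleaves attention layers with Mamba (selective state-space) blocks and replaces positional encodings with an initial Mamba layer. The sketch below is a minimal, hedged illustration of that layout, not this repository's actual code; the `mamba_ssm` dependency (which needs a CUDA build), the class names `MambaFormerBlock` / `MambaFormer`, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a MambaFormer-style hybrid, assuming the mamba-ssm package
# is installed and a CUDA device is available. Layer sizes are illustrative.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # selective SSM block from the mamba-ssm package


class MambaFormerBlock(nn.Module):
    """One hybrid block: causal self-attention followed by a Mamba block
    (the Mamba block stands in for the usual Transformer MLP)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mamba_norm = nn.LayerNorm(d_model)
        self.mamba = Mamba(d_model=d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        # Boolean causal mask: True marks positions that may NOT be attended to.
        causal = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1
        )
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + attn_out
        x = x + self.mamba(self.mamba_norm(x))
        return x


class MambaFormer(nn.Module):
    """Stack of hybrid blocks with an initial Mamba layer in place of
    positional embeddings, per the layout described in the paper."""

    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        self.input_mamba = Mamba(d_model=d_model)
        self.blocks = nn.ModuleList(
            MambaFormerBlock(d_model, n_heads) for _ in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.input_mamba(x)  # sequence mixing instead of positional encoding
        for block in self.blocks:
            x = block(x)
        return x
```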
Alternatives and similar repositories for mambaformer-icl
Users interested in mambaformer-icl are comparing it to the libraries listed below.
- Implementation of Infini-Transformer in Pytorch ☆113 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated last month
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 3 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- ☆106 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆47 · Updated 9 months ago
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- A repository for DenseSSMs ☆89 · Updated last year
- ☆24 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Awesome Triton Resources ☆38 · Updated 7 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆120 · Updated last year
- ☆32 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year
- ☆33 · Updated last year
- ☆50 · Updated last year
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆132 · Updated last month
- ☆38 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated 7 months ago
- Implementations of various linear RNN layers using pytorch and triton ☆54 · Updated 2 years ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆49 · Updated 2 years ago
- Stick-breaking attention ☆61 · Updated 5 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- ☆57 · Updated last year
- Code for "Theoretical Foundations of Deep Selective State-Space Models" (NeurIPS 2024) ☆15 · Updated 11 months ago