claCase / Attention-as-RNN
Unofficial implementation of "Attention as an RNN" (https://arxiv.org/pdf/2405.13956). Both the efficient associative parallel prefix-scan version and the recurrent version are implemented.
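The two computation modes named above can be sketched as follows: a minimal NumPy sketch, assuming the standard numerically stable running (max, numerator, denominator) recurrence for softmax attention over a single query, plus the associative combine operator that makes a parallel prefix scan possible. All function and variable names here are illustrative assumptions, not this repository's API.

```python
# Illustrative sketch, not this repository's code: softmax attention for one
# query computed (1) as an RNN and (2) via an associative reduction.
import numpy as np
from functools import reduce

def attention_recurrent(q, K, V):
    """Softmax attention for one query q over keys K and values V, computed
    as an RNN: the state is a running max m, numerator a, denominator c."""
    d = q.shape[-1]
    m, a, c = -np.inf, np.zeros_like(V[0], dtype=float), 0.0
    for k, v in zip(K, V):
        s = (q @ k) / np.sqrt(d)       # attention score for this step
        m_new = max(m, s)
        scale = np.exp(m - m_new)      # rescale the previous state
        a = a * scale + v * np.exp(s - m_new)
        c = c * scale + np.exp(s - m_new)
        m = m_new
    return a / c

def combine(x, y):
    """Associative operator on (max, numerator, denominator) triples; a
    parallel prefix scan applies it in a balanced tree for O(log N) depth."""
    m1, a1, c1 = x
    m2, a2, c2 = y
    m = max(m1, m2)
    return (m,
            a1 * np.exp(m1 - m) + a2 * np.exp(m2 - m),
            c1 * np.exp(m1 - m) + c2 * np.exp(m2 - m))

def attention_scan(q, K, V):
    """Same result, expressed as a reduction with the associative operator."""
    d = q.shape[-1]
    leaves = [((q @ k) / np.sqrt(d), np.asarray(v, dtype=float), 1.0)
              for k, v in zip(K, V)]
    m, a, c = reduce(combine, leaves)
    return a / c
```

Here `reduce` applies `combine` left to right, but because the operator is associative the triples can equally be combined pairwise in a tree, which is what lets the prefix-scan formulation parallelize over the sequence dimension.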
Related projects:
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models"
- PyTorch implementation of models from the Zamba2 series.
- GoldFinch and other hybrid transformer components
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze…
- My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o…
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http…
- Implementation of GateLoop Transformer in Pytorch and Jax
- Implementation of MambaFormer in Pytorch ++ Zeta from the paper: "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learnin…
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien…
- Linear Attention Sequence Parallelism (LASP)
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"
- This code implements a Radial Basis Function (RBF) based Kolmogorov-Arnold Network (KAN) for function approximation.
- Toy genetic algorithm in Pytorch
- Explorations into the recently proposed Taylor Series Linear Attention
- Implementation of Spectral State Space Models
- Evaluating the Mamba architecture on the Othello game
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries!
- Here we will test various linear attention designs.
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling
- The official repository for HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction.
- Implementation of Agent Attention in Pytorch
- Implementation of the Mamba SSM with hf_integration.
- Kolmogorov-Arnold Networks (KAN) using Jacobi polynomials instead of B-splines.