kyegomez / SimpleMamba
Implementation of a modular, high-performance, and simple Mamba for high-speed applications
☆33 · Updated last week
Related projects
Alternatives and complementary repositories for SimpleMamba
- Implementation of a Light Recurrent Unit in Pytorch ☆46 · Updated last month
- Implementation of MambaFormer in Pytorch ++ Zeta from the paper: "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learnin… ☆21 · Updated last week
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind ☆53 · Updated 2 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆84 · Updated last week
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆109 · Updated last month
- HGRN2: Gated Linear RNNs with State Expansion ☆49 · Updated 3 months ago
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆15 · Updated last week
- Implementation of the proposed Adam-atan2 from Google DeepMind in Pytorch ☆94 · Updated this week
- Implementation of Infini-Transformer in Pytorch ☆104 · Updated last month
- The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters" ☆25 · Updated last week
- ☆33 · Updated 5 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆53 · Updated 6 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆90 · Updated 3 months ago
- Implementation of Agent Attention in Pytorch ☆86 · Updated 4 months ago
- The official repository for the paper "HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction" ☆31 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆93 · Updated last month
- Utilities for PyTorch distributed ☆23 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆86 · Updated 5 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Implementation of a multimodal diffusion transformer in Pytorch ☆97 · Updated 5 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆43 · Updated last month
- Here we will test various linear attention designs. ☆56 · Updated 6 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated 5 months ago
- Explorations into improving ViTArc with Slot Attention ☆37 · Updated last month
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆50 · Updated 7 months ago
- More dimensions = More fun ☆21 · Updated 3 months ago
- A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752), see the recurrence sketch after this list ☆20 · Updated 10 months ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆35 · Updated 2 months ago
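Several of the repositories above build on the Mamba block from the paper linked in the second-to-last item (arXiv:2312.00752). For orientation, below is a minimal sketch of the selective state-space recurrence that paper describes: h_t = Ā_t ⊙ h_{t-1} + B̄_t x_t and y_t = C_t h_t, where Ā_t and B̄_t come from an input-dependent discretization step Δ_t. It uses a plain sequential scan for clarity rather than the paper's hardware-aware parallel scan, and every name in it (`SelectiveSSM`, `proj_B`, `proj_dt`, and so on) is an illustrative assumption, not the API of SimpleMamba or any repository listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveSSM(nn.Module):
    """Minimal selective state-space recurrence in the style of Mamba
    (arXiv:2312.00752). Sequential scan for readability; the paper uses
    a hardware-aware parallel scan. Names are illustrative assumptions."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        # A is input-independent and kept real-negative via a log parameterization.
        self.A_log = nn.Parameter(
            torch.log(torch.arange(1, d_state + 1).float()).repeat(d_model, 1)
        )
        # B, C, and the step size delta are functions of the input ("selective").
        self.proj_B = nn.Linear(d_model, d_state)
        self.proj_C = nn.Linear(d_model, d_state)
        self.proj_dt = nn.Linear(d_model, d_model)
        self.D = nn.Parameter(torch.ones(d_model))  # residual skip term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, l, d = x.shape
        A = -torch.exp(self.A_log)               # (d, n), strictly negative
        Bx = self.proj_B(x)                      # (b, l, n)
        Cx = self.proj_C(x)                      # (b, l, n)
        dt = F.softplus(self.proj_dt(x))         # (b, l, d), positive step sizes
        h = x.new_zeros(b, d, A.shape[1])        # hidden state, (b, d, n)
        ys = []
        for t in range(l):
            # Discretize per step: A_bar = exp(dt * A), B_bar ~ dt * B (ZOH-style).
            A_bar = torch.exp(dt[:, t].unsqueeze(-1) * A)            # (b, d, n)
            B_bar = dt[:, t].unsqueeze(-1) * Bx[:, t].unsqueeze(1)   # (b, d, n)
            h = A_bar * h + B_bar * x[:, t].unsqueeze(-1)
            y = (h * Cx[:, t].unsqueeze(1)).sum(-1)                  # (b, d)
            ys.append(y)
        return torch.stack(ys, dim=1) + x * self.D  # (b, l, d)
```

As a quick smoke test, `SelectiveSSM(d_model=64)(torch.randn(2, 128, 64))` should return a tensor of shape `(2, 128, 64)`. The input-dependence of B, C, and Δ is what makes the SSM "selective"; with those held fixed, the recurrence reduces to a linear time-invariant SSM in the S4 family.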