srush / annotated-mamba
Annotated version of the Mamba paper
☆482 · Updated last year
Alternatives and similar repositories for annotated-mamba
Users interested in annotated-mamba are comparing it to the libraries listed below.
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch · ☆514 · Updated 2 weeks ago
- Implementation of https://srush.github.io/annotated-s4 · ☆495 · Updated 2 years ago
- Helpful tools and examples for working with flex-attention · ☆811 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton · ☆544 · Updated this week
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI · ☆282 · Updated 2 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆550 · Updated 5 months ago
- Understand and test language model architectures on synthetic tasks · ☆197 · Updated 2 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆586 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX · ☆276 · Updated 11 months ago
- ☆267 · Updated 10 months ago
- Some preliminary explorations of Mamba's context scaling · ☆212 · Updated last year
- For optimization algorithm research and development · ☆518 · Updated this week
- Puzzles for exploring transformers · ☆347 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch · ☆333 · Updated 11 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (see the sketch after this list) · ☆681 · Updated 6 months ago
- ☆290 · Updated 5 months ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch · ☆408 · Updated 4 months ago
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters · ☆559 · Updated 3 months ago
- A MAD laboratory to improve AI architecture designs 🧪 · ☆116 · Updated 5 months ago
- Language Modeling with the H3 State Space Model · ☆518 · Updated last year
- What would you do with 1000 H100s... · ☆1,048 · Updated last year
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead · ☆661 · Updated last week
- ☆190 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆234 · Updated 3 months ago
- A repository for log-time feedforward networks · ☆220 · Updated last year
- ☆166 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (the "Heisen sequence" trick; sketched after this list) · ☆117 · Updated 7 months ago
- ☆286 · Updated last month
- Accelerated First Order Parallel Associative Scan (sketched after this list) · ☆181 · Updated 9 months ago
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (sketched after this list) · ☆378 · Updated last year
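
A few of the techniques above are compact enough to sketch in code. First, rotary embeddings (the Roformer item): each pair of query/key channels is rotated by an angle proportional to the token's position, so attention logits depend only on relative offsets. A minimal PyTorch sketch, assuming the interleaved-pair convention; the repo itself supports several variants:

```python
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate each channel pair of x (shape (..., seq, dim)) by position-dependent angles."""
    seq_len, dim = x.shape[-2], x.shape[-1]
    assert dim % 2 == 0, "feature dimension must be even"
    # One frequency per channel pair; angle = position * frequency.
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq  # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # standard 2D rotation, applied pairwise
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Rotate queries and keys before attention; values are left untouched.
q = rotary_embed(torch.randn(2, 8, 128, 64))  # (batch, heads, seq, head_dim)
k = rotary_embed(torch.randn(2, 8, 128, 64))
```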
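
Second, the logcumsumexp trick behind the minimal Mamba item: the gated recurrence h_t = a_t·h_{t-1} + x_t unrolls into a prefix sum in log space, so the whole sequence takes two cumulative ops instead of a Python loop. A sketch assuming strictly positive a_t and x_t; the repo handles signs and numerical stability more carefully:

```python
import torch

def heisen_scan(log_a: torch.Tensor, log_x: torch.Tensor) -> torch.Tensor:
    """log h_t for h_t = a_t * h_{t-1} + x_t with h_0 = 0, scanned along dim -1.

    Unrolling: h_t = sum_{s<=t} (prod_{r=s+1}^{t} a_r) * x_s
                   = exp(A_t) * sum_{s<=t} exp(log x_s - A_s),  A_t = cumsum(log a)_t.
    """
    A = torch.cumsum(log_a, dim=-1)
    return A + torch.logcumsumexp(log_x - A, dim=-1)

# Check against the sequential recurrence.
a = torch.rand(256, dtype=torch.float64) * 0.9 + 0.05  # gates in (0, 1)
x = torch.rand(256, dtype=torch.float64) + 0.1
h, hs = torch.zeros((), dtype=torch.float64), []
for t in range(256):
    h = a[t] * h + x[t]
    hs.append(h)
assert torch.allclose(heisen_scan(a.log(), x.log()).exp(), torch.stack(hs))
```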
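
Third, the parallel associative scan item targets the same recurrence from another angle: pairs (a, b) representing h ↦ a·h + b compose associatively, so all prefixes can be computed in O(log n) parallel passes. An illustrative Hillis-Steele scan in plain PyTorch; the actual repo ships fused, much faster kernels:

```python
import torch

def linear_scan(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Inclusive scan of h_t = a_t * h_{t-1} + b_t (h_0 = 0) along dim -1.

    Element (a, b) means h -> a*h + b; composing step s then step t gives
    (a_s * a_t, b_s * a_t + b_t), which is associative, so a doubling scan
    computes all prefixes in ceil(log2 n) passes.
    """
    a, b = a.clone(), b.clone()
    n, d = a.shape[-1], 1
    while d < n:
        # Combine each element with the one d positions earlier (identity-padded).
        a_prev = torch.cat([torch.ones_like(a[..., :d]), a[..., :-d]], dim=-1)
        b_prev = torch.cat([torch.zeros_like(b[..., :d]), b[..., :-d]], dim=-1)
        a, b = a_prev * a, b_prev * a + b  # RHS uses the pre-update a
        d *= 2
    return b  # b now holds h_t; a holds the cumulative products

a = torch.rand(8, 256, dtype=torch.float64)
b = torch.randn(8, 256, dtype=torch.float64)
ref, outs = torch.zeros(8, dtype=torch.float64), []
for t in range(256):
    ref = a[:, t] * ref + b[:, t]
    outs.append(ref)
assert torch.allclose(linear_scan(a, b), torch.stack(outs, dim=-1))
```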
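
Finally, the memory-efficient attention item: the paper's observation is that the softmax normalizer can be accumulated online, so keys and values are processed chunk by chunk and the full n×n score matrix is never materialized. A single-head sketch with an illustrative chunk_size knob; the repo adds batching, heads, and masking:

```python
import torch

def chunked_attention(q, k, v, chunk_size: int = 128) -> torch.Tensor:
    """softmax(q @ k.T / sqrt(d)) @ v for (seq, dim) tensors, without an (n, n) matrix."""
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0], 1), float("-inf"))  # running max score per query
    l = torch.zeros(q.shape[0], 1)                  # running softmax normalizer
    acc = torch.zeros_like(q)                       # running weighted sum of values
    for start in range(0, k.shape[0], chunk_size):
        kc, vc = k[start:start + chunk_size], v[start:start + chunk_size]
        s = (q @ kc.T) * scale                      # scores for this chunk only
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        correction = torch.exp(m - m_new)           # rescale old stats to the new max
        p = torch.exp(s - m_new)
        l = l * correction + p.sum(dim=-1, keepdim=True)
        acc = acc * correction + p @ vc
        m = m_new
    return acc / l

# Matches dense attention on a small example.
q, k, v = (torch.randn(512, 64) for _ in range(3))
dense = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), dense, atol=1e-4)
```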