srush / annotated-mamba
Annotated version of the Mamba paper
☆457 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for annotated-mamba
- Helpful tools and examples for working with flex-attention ☆469 · Updated 3 weeks ago
- For optimization algorithm research and development. ☆449 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆476 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483 · Updated 3 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- Reading list for research topics in state-space models ☆241 · Updated 2 weeks ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence); a scan sketch in this style appears after the list. ☆102 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI ☆256 · Updated last week
- Implementation of https://srush.github.io/annotated-s4 ☆469 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆191 · Updated 9 months ago
- Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆335 · Updated last week
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆537 · Updated 6 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆252 · Updated 5 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆516 · Updated this week
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,339 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated 3 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆281 · Updated last month
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆571 · Updated last week
- Building blocks for foundation models. ☆394 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan ☆163 · Updated 3 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆293 · Updated 5 months ago
- Puzzles for exploring transformers ☆325 · Updated last year
- What would you do with 1000 H100s... ☆903 · Updated 10 months ago
- 94% on CIFAR-10 in 2.6 seconds 💨 96% in 27 seconds ☆177 · Updated last week
- A repository for log-time feedforward networks ☆216 · Updated 7 months ago
- TensorDict is a dedicated tensor container for PyTorch. ☆840 · Updated this week
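
The minimal Mamba item above mentions evaluating the SSM recurrence with logcumsumexp rather than a sequential loop. Below is a small, hedged sketch of that idea (my own illustration, not code taken from the listed repository), assuming the simplest scalar case h_t = a_t · h_{t-1} + b_t with positive coefficients and h_0 = 0; the function names are placeholders.

```python
# Minimal sketch: parallel evaluation of h_t = a_t * h_{t-1} + b_t (a_t, b_t > 0)
# using cumulative sums in log space instead of a Python loop.
import torch

def scan_logspace(log_a: torch.Tensor, log_b: torch.Tensor) -> torch.Tensor:
    """log_a, log_b: shape (T,) holding log(a_t) and log(b_t). Returns h_1..h_T."""
    a_star = torch.cumsum(log_a, dim=0)  # log of the prefix products of a
    # h_t = exp(a_star_t) * sum_{i<=t} exp(log_b_i - a_star_i), computed stably in log space
    return torch.exp(a_star + torch.logcumsumexp(log_b - a_star, dim=0))

def scan_loop(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Naive sequential reference for comparison."""
    h, out = torch.zeros(()), []
    for a_t, b_t in zip(a, b):
        h = a_t * h + b_t
        out.append(h)
    return torch.stack(out)

if __name__ == "__main__":
    T = 16
    a = 0.5 + 0.5 * torch.rand(T)   # decay coefficients in (0.5, 1.0)
    b = 0.1 + torch.rand(T)         # positive inputs
    assert torch.allclose(scan_logspace(a.log(), b.log()), scan_loop(a, b), atol=1e-5)
```

The same rewrite underlies the full Mamba scan, where a_t and b_t become per-channel state-space parameters and the loop over the sequence is replaced by batched cumulative operations.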