srush / annotated-mamba
Annotated version of the Mamba paper
☆445 · Updated 6 months ago
Related projects:
- Helpful tools and examples for working with flex-attention ☆341 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆530 · Updated 3 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆452 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆451 · Updated last month
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆278 · Updated 3 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆456 · Updated last year
- Reading list for research topics in state-space models ☆209 · Updated last week
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆248 · Updated 10 months ago
- Some preliminary explorations of Mamba's context scaling. ☆184 · Updated 7 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆492 · Updated this week
- Implementation of Rotary Embeddings, from the Roformer paper, in PyTorch ☆528 · Updated last week
- Puzzles for exploring transformers ☆293 · Updated last year
- Building blocks for foundation models. ☆345 · Updated 8 months ago
- Code repository for Black Mamba ☆218 · Updated 7 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆206 · Updated last month
- What would you do with 1000 H100s... ☆816 · Updated 8 months ago
- A repository for log-time feedforward networks ☆215 · Updated 5 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆245 · Updated 3 months ago
- Code repository for the paper "Matryoshka Representation Learning" ☆398 · Updated 7 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆321 · Updated 2 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆156 · Updated 4 months ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file. More efficient than using for loops, but probably less efficient tha… ☆89 · Updated 5 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,190 · Updated this week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆263 · Updated 7 months ago
- GPT-2 (124M) quality in 5B tokens ☆227 · Updated last week