pytorch-labs / attention-gym
Helpful tools and examples for working with flex-attention
★689 · Updated last week
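For context, a minimal sketch of the FlexAttention API that attention-gym builds on, assuming PyTorch >= 2.5 (typically run under `torch.compile` on GPU for performance); the shapes and the `causal` mask are illustrative, not taken from attention-gym itself:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"

def causal(b, h, q_idx, kv_idx):
    # mask_mod: return True wherever a query position may attend to a key position
    return q_idx >= kv_idx

B, H, S, D = 2, 4, 256, 64
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

# The block mask is precomputed once and reused across forward passes;
# B=None / H=None broadcast the mask over batch and heads.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)
```

Attention variants such as ALiBi or sliding-window attention are expressed as small `mask_mod`/`score_mod` functions like `causal` above, which is the kind of example the repo collects.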
Alternatives and similar repositories for attention-gym:
Users interested in attention-gym are comparing it to the libraries listed below.
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ★506 · Updated 4 months ago (see the ring-attention sketch after this list)
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ★505 · Updated last week (see the Muon sketch after this list)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ★524 · Updated last month
- Large Context Attention ★690 · Updated last month
- Annotated version of the Mamba paper ★475 · Updated last year
- Ring attention implementation with flash attention ★711 · Updated 3 weeks ago
- [ICLR 2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ★535 · Updated last month
- Scalable and Performant Data Loading ★227 · Updated this week
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ★232 · Updated 2 weeks ago
- ★381 · Updated 2 weeks ago
- Pipeline Parallelism for PyTorch
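Two entries above implement Ring Attention (Liu et al.). As background, here is a hedged single-process sketch of the core idea: keys and values are processed in chunks with an online softmax, so the full attention matrix is never materialized. In the actual libraries each chunk lives on a different device and is rotated around a ring with collective ops; `ring_attention_sim` and all shapes here are illustrative:

```python
import math
import torch
import torch.nn.functional as F

def ring_attention_sim(q, k, v, num_chunks=4):
    # Simulate the ring: split K/V into chunks (as if sharded across devices)
    # and accumulate attention chunk by chunk with an online softmax.
    B, H, S, D = q.shape
    out = torch.zeros_like(q)
    m = torch.full((B, H, S, 1), float("-inf"))  # running max of scores
    l = torch.zeros(B, H, S, 1)                  # running softmax denominator

    for k_c, v_c in zip(k.chunk(num_chunks, dim=2), v.chunk(num_chunks, dim=2)):
        s = q @ k_c.transpose(-2, -1) / math.sqrt(D)
        m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)        # rescale previous partial sums
        l = l * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ v_c
        m = m_new
    return out / l

# Sanity check against PyTorch's reference attention (illustrative shapes).
q = torch.randn(1, 2, 128, 32)
k = torch.randn(1, 2, 128, 32)
v = torch.randn(1, 2, 128, 32)
ref = F.scaled_dot_product_attention(q, k, v)
assert torch.allclose(ring_attention_sim(q, k, v), ref, atol=1e-4)
```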
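For the Muon entry, a sketch of the optimizer's core step as described in its writeup: SGD with momentum whose 2D weight updates are orthogonalized by a quintic Newton-Schulz iteration. `muon_step`, its default hyperparameters, and the single-matrix framing are illustrative assumptions, not the library's API (the real implementation also runs the iteration in bfloat16 and applies a shape-dependent scale):

```python
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Quintic Newton-Schulz iteration that pushes G toward the nearest
    # semi-orthogonal matrix; coefficients are those commonly quoted for Muon.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)  # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(param, grad, buf, lr=0.02, momentum=0.95):
    # Hypothetical single-matrix step: momentum accumulation, then replace
    # the raw update direction with its orthogonalized counterpart.
    buf.mul_(momentum).add_(grad)
    param.add_(newton_schulz(buf), alpha=-lr)
```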