pytorch-labs / attention-gym
Helpful tools and examples for working with flex-attention
☆583 · Updated this week
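For context, "flex-attention" refers to PyTorch's `torch.nn.attention.flex_attention` API, which lets you customize attention by passing a `score_mod` callback. Below is a minimal sketch (the shapes and the causal mask are illustrative assumptions, not code from the attention-gym repo):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Illustrative shapes: (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))

# score_mod receives the raw attention score plus batch/head/query/key indices;
# this one implements a simple causal mask by sending future positions to -inf.
def causal(score, b, h, q_idx, kv_idx):
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

out = flex_attention(q, k, v, score_mod=causal)  # same shape as q
```

In practice the call is usually wrapped in `torch.compile` so the custom `score_mod` is fused into a single attention kernel.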
Alternatives and similar repositories for attention-gym:
Users interested in attention-gym are comparing it to the libraries listed below.
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆492 · Updated 2 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- Large Context Attention ☆670 · Updated 5 months ago
- Ring attention implementation with flash attention ☆645 · Updated 3 weeks ago
- Annotated version of the Mamba paper ☆469 · Updated 10 months ago
- Pipeline Parallelism for PyTorch ☆736 · Updated 4 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆219 · Updated 5 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆376 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆644 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆192 · Updated last month
- LLM KV cache compression made easy ☆303 · Updated this week
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆215 · Updated this week
- Efficient implementations of state-of-the-art linear attention models in Pytorch and Triton ☆1,669 · Updated this week
- Scalable and Performant Data Loading ☆207 · Updated this week
- ☆304 · Updated 2 weeks ago
- Efficient LLM Inference over Long Sequences ☆344 · Updated 2 weeks ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆292 · Updated 2 weeks ago
- For optimization algorithm research and development. ☆484 · Updated this week
- Applied AI experiments and examples for PyTorch ☆211 · Updated this week
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆297 · Updated 7 months ago
- Building blocks for foundation models. ☆435 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆270 · Updated 2 months ago
- Puzzles for learning Triton ☆1,300 · Updated last month
- Cataloging released Triton kernels. ☆155 · Updated last week
- Microsoft Automatic Mixed Precision Library ☆549 · Updated 3 months ago
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆210 · Updated last week
- ☆135 · Updated last year
- ☆240 · Updated 4 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆681 · Updated 2 weeks ago
- ☆170 · Updated this week