Helpful tools and examples for working with flex-attention
★1,161 · Updated Feb 8, 2026
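attention-gym collects examples built on FlexAttention, PyTorch's programmable attention API: you write small callables that modify attention scores or mask out positions, and torch.compile fuses them into a single kernel. A minimal sketch of the score_mod side (assuming PyTorch 2.5+, where torch.nn.attention.flex_attention ships; the relative-position bias here is an arbitrary illustration, not anything from attention-gym itself):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Toy tensors, shaped (batch, heads, seq_len, head_dim).
q, k, v = (torch.randn(1, 4, 128, 64) for _ in range(3))

# score_mod receives each raw attention score plus its (batch, head,
# query index, key index) coordinates and returns a modified score.
def rel_bias(score, b, h, q_idx, kv_idx):
    return score + (q_idx - kv_idx)  # toy additive relative-position bias

# Usually wrapped in torch.compile for speed; eager mode also works.
out = flex_attention(q, k, v, score_mod=rel_bias)
print(out.shape)  # torch.Size([1, 4, 128, 64])
```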
Alternatives and similar repositories for attention-gym
Users interested in attention-gym are comparing it to the libraries listed below.
- A PyTorch native platform for training generative AI models (★5,191 · updated this week)
- 🚀 Efficient implementations of state-of-the-art linear attention models (★4,692 · updated this week)
- Tile primitives for speedy kernels (★3,244 · updated Mar 17, 2026)
- Ring attention implementation with flash attention (★998 · updated Sep 10, 2025)
- FlashInfer: Kernel Library for LLM Serving (★5,231 · updated this week)
- Distributed Compiler based on Triton for Parallel Systems (★1,398 · updated Mar 11, 2026)
- Efficient Triton Kernels for LLM Training (★6,242 · updated this week)
- A sparse attention kernel supporting mixed sparse patterns (★485 · updated Jan 18, 2026)
- PyTorch native quantization and sparsity for training and inference (★2,746 · updated this week)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (★978 · updated Feb 5, 2026)
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. (★809 · updated Mar 23, 2026)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (★598 · updated Aug 12, 2025)
- Using FlexAttention to compute attention with different masking patterns (★47 · updated Sep 22, 2024; see the mask_mod sketch after this list)
- A Quirky Assortment of CuTe Kernels (★863 · updated Mar 22, 2026)
- Fast and memory-efficient exact attention (★22,938 · updated this week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (★3,246 · updated this week)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (★723 · updated this week)
- Applied AI experiments and examples for PyTorch (★319 · updated Aug 22, 2025)
- Hackable and optimized Transformers building blocks, supporting a composable construction. (★10,388 · updated Mar 18, 2026)
- Minimalistic large language model 3D-parallelism training (★2,626 · updated Feb 19, 2026)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (★653 · updated Jan 15, 2026)
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. (★969 · updated Feb 25, 2026)
- FlexAttention w/ FlashAttention3 Support (★27 · updated Oct 5, 2024)
- Puzzles for learning Triton (★2,348 · updated Mar 18, 2026)
- 🔥 A minimal training framework for scaling FLA models (★359 · updated Nov 15, 2025)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (★274 · updated Jul 6, 2025)
- Large Context Attention (★769 · updated Oct 13, 2025)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. (★1,273 · updated Aug 28, 2025)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (★247 · updated Jun 15, 2025)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (★5,432 · updated this week)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. (★335 · updated this week)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (★2,175 · updated this week)
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. (★6,186 · updated Aug 22, 2025)
- [ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… (★3,249 · updated Jan 17, 2026)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… (★1,203 · updated Mar 9, 2026)
- ★136 · updated May 29, 2025
- Triton implementation of FlashAttention2 that adds Custom Masks. (★170 · updated Aug 14, 2024)
- ★109 · updated Mar 12, 2026
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. (★1,045 · updated Sep 4, 2024)
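Several entries above (the custom-masking examples, the FlashAttention-3 port) center on FlexAttention's mask_mod mechanism, where a boolean predicate over (batch, head, q_idx, kv_idx) is compiled into a block-sparse mask. A minimal sketch of a sliding-window causal mask, again assuming PyTorch 2.5+; the 64-token window is an arbitrary choice:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, S, D = 1, 4, 256, 64
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))

# mask_mod returns True wherever a query position may attend to a key position.
def sliding_window_causal(b, h, q_idx, kv_idx):
    return (q_idx >= kv_idx) & (q_idx - kv_idx <= 64)  # causal, 64-token window

# B=None / H=None broadcast the same mask over batch and heads.
block_mask = create_block_mask(sliding_window_causal, B=None, H=None,
                               Q_LEN=S, KV_LEN=S, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)
```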