BobMcDear / attorch
A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.
☆534 · Updated this week
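attorch's modules pair a Triton kernel with a thin PyTorch wrapper. As a rough sketch of that pattern (illustrative only, not attorch's actual code; the kernel name, wrapper, and block size are assumptions), a minimal elementwise ReLU in Triton looks like this:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def relu_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # Guard against out-of-bounds accesses.
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, tl.maximum(x, 0.0), mask=mask)

def relu(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical wrapper: launch a 1D grid covering all elements.
    # Assumes x is a contiguous CUDA tensor.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    relu_kernel[grid](x, out, n, BLOCK_SIZE=1024)
    return out
```

Real modules typically fuse more work per kernel, but the launch structure is similar.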
Alternatives and similar repositories for attorch:
Users who are interested in attorch are comparing it to the libraries listed below.
- An experimental PyTorch-native float8 training UX ☆223 · Updated 8 months ago
- Cataloging released Triton kernels ☆217 · Updated 3 months ago
- ☆200 · Updated this week
- Helpful tools and examples for working with flex-attention (usage sketch after this list) ☆726 · Updated 2 weeks ago
- Applied AI experiments and examples for PyTorch ☆262 · Updated last month
- Fast low-bit matmul kernels in Triton ☆291 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆808 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆165 · Updated last month
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆786 · Updated 3 months ago
- ☆153 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆511 · Updated 6 months ago
- Large Context Attention ☆704 · Updated 3 months ago
- Puzzles for learning Triton ☆1,591 · Updated 5 months ago
- LLM KV cache compression made easy ☆463 · Updated last week
- ☆295 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch → CUDA problems ☆274 · Updated last week
- Annotated version of the Mamba paper ☆483 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆240 · Updated last week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆192 · Updated this week
- Collection of kernels written in the Triton language ☆120 · Updated 3 weeks ago
- Building blocks for foundation models ☆482 · Updated last year
- Pipeline Parallelism for PyTorch ☆764 · Updated 8 months ago
- Fastest kernels written from scratch ☆236 · Updated 3 weeks ago
- Ring attention implementation with flash attention ☆743 · Updated 2 weeks ago
- Scalable and Performant Data Loading ☆237 · Updated last week
- BitBLAS: a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment ☆594 · Updated 2 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆991 · Updated last month
- Ring-attention experiments ☆130 · Updated 6 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆210 · Updated 4 months ago
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 3 years ago
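To make the flex-attention entry above concrete, here is a minimal usage sketch (assumes PyTorch 2.5+, where torch.nn.attention.flex_attention ships; the relative-position bias score_mod is an arbitrary example, not code from that repository):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# A score_mod rewrites each raw attention score given its (batch, head,
# query index, key/value index); here, a simple relative-position bias.
def relative_bias(score, b, h, q_idx, kv_idx):
    return score + (kv_idx - q_idx)

# Shapes follow the (batch, heads, seq_len, head_dim) convention.
q, k, v = (torch.randn(1, 8, 128, 64, device="cuda") for _ in range(3))
out = flex_attention(q, k, v, score_mod=relative_bias)  # (1, 8, 128, 64)
# For performance, flex_attention is normally wrapped in torch.compile.
```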