BobMcDear / attorch
A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.
☆524 · Updated last month
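To illustrate the approach the description refers to, here is a minimal sketch of a Triton kernel wrapped as a PyTorch module. It is not taken from attorch; the names `relu_kernel` and `TritonReLU` and the block size of 1024 are illustrative assumptions, and unlike attorch's real modules it covers the forward pass only (CUDA tensor required):

```python
import torch
import triton
import triton.language as tl


@triton.jit
def relu_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, tl.maximum(x, 0.0), mask=mask)


class TritonReLU(torch.nn.Module):
    """Illustrative stand-in for torch.nn.ReLU backed by the kernel above."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.contiguous()  # the kernel assumes flat, contiguous memory
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)
        relu_kernel[grid](x, out, n, BLOCK_SIZE=1024)
        return out


# Usage: y = TritonReLU()(torch.randn(4096, device="cuda"))
```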
Alternatives and similar repositories for attorch:
Users interested in attorch are comparing it to the libraries listed below.
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- ☆192 · Updated this week
- Fast low-bit matmul kernels in Triton ☆275 · Updated this week
- Helpful tools and examples for working with flex-attention ☆701 · Updated 2 weeks ago
- Cataloging released Triton kernels ☆212 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆251 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆509 · Updated 5 months ago
- ☆152 · Updated last year
- LLM KV cache compression made easy ☆444 · Updated 2 weeks ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆764 · Updated 3 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆154 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems ☆239 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆234 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆778 · Updated this week
- ☆292 · Updated this week
- Annotated version of the Mamba paper ☆478 · Updated last year
- ring-attention experiments ☆128 · Updated 5 months ago
- Collection of kernels written in the Triton language ☆117 · Updated last month
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆236 · Updated last month
- Pipeline Parallelism for PyTorch ☆761 · Updated 7 months ago
- Large Context Attention ☆696 · Updated 2 months ago
- UNet diffusion model in pure CUDA ☆600 · Updated 9 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆970 · Updated 3 weeks ago
- Puzzles for learning Triton ☆1,547 · Updated 4 months ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆858 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts ☆209 · Updated 4 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆566 · Updated last month
- Microsoft Automatic Mixed Precision Library ☆587 · Updated 6 months ago
- Ring attention implementation with flash attention ☆721 · Updated last month
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 2 years ago