BobMcDear / attorch
A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.
☆589 · Updated 4 months ago
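For context, attorch's general pattern is a PyTorch-facing function or module whose computation is delegated to a Triton kernel. The sketch below is a minimal, illustrative example of that pattern only; `relu_kernel` and `relu` are hypothetical names made up for this example, not attorch's actual API.

```python
# Minimal sketch (not attorch's API): a PyTorch-facing op backed by a Triton kernel.
import torch
import triton
import triton.language as tl


@triton.jit
def relu_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, tl.maximum(x, 0.0), mask=mask)


def relu(x: torch.Tensor) -> torch.Tensor:
    # Expects a contiguous CUDA tensor; the grid covers all elements in 1024-wide blocks.
    out = torch.empty_like(x)
    n_elements = x.numel()
    grid = (triton.cdiv(n_elements, 1024),)
    relu_kernel[grid](x, out, n_elements, BLOCK_SIZE=1024)
    return out
```

Called as `relu(torch.randn(4096, device="cuda"))`, this should match `torch.relu` on the same input; the real library covers full neural network modules rather than a single activation.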
Alternatives and similar repositories for attorch
Users interested in attorch are comparing it to the libraries listed below.
- Cataloging released Triton kernels. ☆280 · Updated 3 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆697 · Updated this week
- Applied AI experiments and examples for PyTorch ☆312 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆413 · Updated 2 weeks ago
- A Quirky Assortment of CuTe Kernels ☆724 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆549 · Updated 7 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆462 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆276 · Updated last month
- ring-attention experiments ☆160 · Updated last year
- Helpful tools and examples for working with flex-attention ☆1,095 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆243 · Updated 7 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆306 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆728 · Updated this week
- Pipeline Parallelism for PyTorch ☆783 · Updated last year
- kernels, of the mega variety ☆634 · Updated 3 months ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆737 · Updated 3 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆153 · Updated 2 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,035 · Updated last year
- A library to analyze PyTorch traces. ☆452 · Updated 2 weeks ago
- Annotated version of the Mamba paper ☆493 · Updated last year
- Building blocks for foundation models. ☆586 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆217 · Updated 3 weeks ago
- Load compute kernels from the Hub ☆354 · Updated 2 weeks ago
- Fastest kernels written from scratch ☆501 · Updated 3 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆259 · Updated 2 months ago
- Collection of kernels written in the Triton language ☆174 · Updated 8 months ago