gpu-mode / ring-attention
ring-attention experiments
☆127 · Updated 5 months ago
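For context, a minimal single-process sketch of the ring-attention pattern this repo experiments with (my own illustration, not the repo's code): each rank keeps its query block while key/value blocks rotate around the ring, and partial results are merged with the usual online-softmax (log-sum-exp) update. Real implementations overlap the K/V exchange with compute via collectives; here plain Python indexing stands in for communication.

```python
import torch

def ring_attention(q_blocks, k_blocks, v_blocks):
    """q/k/v_blocks: lists of [block_len, dim] tensors, one per ring rank."""
    n = len(q_blocks)
    outs = []
    for r in range(n):                                  # each "device" in the ring
        q = q_blocks[r]
        acc = torch.zeros_like(q)                       # running weighted sum of V
        m = torch.full((q.shape[0], 1), float("-inf"))  # running row-wise max
        l = torch.zeros(q.shape[0], 1)                  # running softmax normalizer
        for step in range(n):                           # K/V blocks travel the ring
            src = (r + step) % n
            k, v = k_blocks[src], v_blocks[src]
            s = q @ k.T / q.shape[-1] ** 0.5            # scores for this block
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            scale = torch.exp(m - m_new)                # rescale old accumulators
            p = torch.exp(s - m_new)
            acc = acc * scale + p @ v
            l = l * scale + p.sum(dim=-1, keepdim=True)
            m = m_new
        outs.append(acc / l)                            # finalize softmax division
    return torch.cat(outs)
```

On random inputs this matches full (non-causal) softmax attention computed against the concatenated K/V, which is a quick way to sanity-check the online-softmax bookkeeping.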
Alternatives and similar repositories for ring-attention:
Users interested in ring-attention are comparing it to the libraries listed below.
- Cataloging released Triton kernels. ☆204 · Updated 2 months ago
- This repository contains the experimental PyTorch-native float8 training UX. ☆222 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch. ☆249 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆74 · Updated 4 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆207 · Updated 3 months ago
- Collection of kernels written in the Triton language. ☆114 · Updated last month
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆104 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch-to-CUDA problems. ☆234 · Updated this week
- Fast low-bit matmul kernels in Triton. ☆267 · Updated this week
- Extensible collectives library in Triton. ☆84 · Updated 6 months ago