gpu-mode / ring-attention
ring-attention experiments
☆123 · Updated 3 months ago
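For context, ring attention splits a long sequence across workers: each worker keeps its query block resident while key/value blocks circulate around a ring, and the partial results are merged with an online (log-sum-exp) softmax. The sketch below is a single-process simulation of that accumulation pattern for illustration only; it is not taken from this repository, and the function name and block layout are assumptions.

```python
# Minimal single-process sketch of the ring-attention idea (illustrative, not this repo's code):
# queries stay local, K/V blocks "rotate" one hop per step, and each block's contribution
# is folded into a running log-sum-exp accumulator so the final result equals full attention.
import torch

def ring_attention_sim(q_blocks, k_blocks, v_blocks):
    """q_blocks, k_blocks, v_blocks: lists of [block_len, dim] tensors, one per worker."""
    n = len(q_blocks)
    outputs = []
    for i in range(n):                                   # each "worker" owns one query block
        q = q_blocks[i]
        acc = torch.zeros_like(q)                        # running (rescaled) numerator
        lse = torch.full((q.shape[0],), float("-inf"))   # running log-sum-exp of scores
        for step in range(n):                            # K/V blocks arrive one hop at a time
            j = (i + step) % n
            scores = q @ k_blocks[j].T / q.shape[-1] ** 0.5
            block_lse = torch.logsumexp(scores, dim=-1)
            new_lse = torch.logaddexp(lse, block_lse)
            # rescale what we accumulated so far, then add this block's contribution
            acc = acc * torch.exp(lse - new_lse).unsqueeze(-1) \
                + torch.exp(scores - new_lse.unsqueeze(-1)) @ v_blocks[j]
            lse = new_lse
        outputs.append(acc)                              # softmax(QK^T/sqrt(d)) V for block i
    return outputs

# Toy usage: 4 "workers", each holding a 128-token block of a 512-token sequence.
blocks = [torch.randn(128, 64) for _ in range(4)]
out = ring_attention_sim(blocks, blocks, blocks)
```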
Alternatives and similar repositories for ring-attention:
Users interested in ring-attention are comparing it to the repositories listed below.
- Cataloging released Triton kernels. ☆164 · Updated last month
- ☆175 · Updated this week
- Applied AI experiments and examples for PyTorch. ☆223 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆86 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆196 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX. ☆221 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton. ☆231 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆221 · Updated this week
- ☆141 · Updated last year
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems. ☆166 · Updated this week
- Extensible collectives library in Triton. ☆82 · Updated 4 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs. ☆228 · Updated this week
- ☆99 · Updated 5 months ago
- Collection of kernels written in the Triton language. ☆97 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity. ☆64 · Updated 5 months ago
- ☆75 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters. ☆114 · Updated 2 months ago
- ☆88 · Updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆64 · Updated 3 months ago
- ☆67 · Updated 3 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆187 · Updated this week
- Experiment of using Tangent to autodiff Triton. ☆75 · Updated last year
- ☆65 · Updated 2 months ago
- ☆86 · Updated 11 months ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers. ☆205 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism. ☆64 · Updated 7 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched). ☆68 · Updated 10 months ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆67 · Updated 5 months ago
- ☆180 · Updated 7 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 8 months ago