gpu-mode / ring-attention
ring-attention experiments
☆155 · Updated last year
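The repo's topic, ring attention, distributes attention across a ring of devices: each rank keeps its query chunk resident while key/value chunks rotate around the ring, and partial results are merged with an online (log-sum-exp) softmax. For orientation, here is a minimal single-process sketch of that idea; it is not code from this repo, a plain loop over K/V chunks stands in for the ring's send/recv steps, and all function and variable names are illustrative.

```python
import torch

def ring_attention_reference(q, k, v, n_chunks=4):
    """Illustrative single-process version of the ring-attention update:
    K/V are split into chunks (one per hypothetical ring rank) and streamed
    past the local queries; partial outputs are merged with a numerically
    stable log-sum-exp (online softmax) rescaling."""
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    # Running log-sum-exp of all attention scores seen so far.
    lse = torch.full(q.shape[:-1] + (1,), float("-inf"))
    for k_c, v_c in zip(k.chunk(n_chunks, dim=-2), v.chunk(n_chunks, dim=-2)):
        scores = q @ k_c.transpose(-1, -2) * scale          # (..., S_q, S_kc)
        chunk_lse = scores.logsumexp(dim=-1, keepdim=True)
        chunk_out = torch.softmax(scores, dim=-1) @ v_c     # normalized within the chunk
        new_lse = torch.logaddexp(lse, chunk_lse)
        # Rescale the old accumulator and the new chunk into a common normalizer.
        out = out * (lse - new_lse).exp() + chunk_out * (chunk_lse - new_lse).exp()
        lse = new_lse
    return out

if __name__ == "__main__":
    q, k, v = (torch.randn(2, 8, 128, 64) for _ in range(3))
    full = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    print(torch.allclose(ring_attention_reference(q, k, v), full, atol=1e-5))  # True
```

In an actual multi-GPU run, the body of the chunk loop is interleaved with point-to-point transfers so communication overlaps with the blockwise attention compute.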
Alternatives and similar repositories for ring-attention
Users interested in ring-attention are comparing it to the repositories listed below.
- Cataloging released Triton kernels. ☆265 · Updated 2 months ago
- A bunch of kernels that might make stuff slower 😉 ☆64 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆277 · Updated this week
- A collection of kernels written in the Triton language. ☆161 · Updated 7 months ago
- Fast low-bit matmul kernels in Triton. ☆392 · Updated 2 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. ☆248 · Updated last month
- Applied AI experiments and examples for PyTorch. ☆302 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆126 · Updated 5 months ago
- An extensible collectives library in Triton. ☆90 · Updated 7 months ago
- This repository contains the experimental PyTorch-native float8 training UX. ☆223 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- A minimal cache manager for PagedAttention, on top of llama3. ☆125 · Updated last year
- How to ensure correctness and ship LLM-generated kernels in PyTorch. ☆114 · Updated last week
- Flash-Muon: An Efficient Implementation of Muon Optimizer. ☆206 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity. ☆85 · Updated last year
- Triton-based Symmetric Memory operators and examples. ☆61 · Updated 3 weeks ago
- A Quirky Assortment of CuTe Kernels. ☆651 · Updated 2 weeks ago
- JAX backend for SGL. ☆146 · Updated this week
- Load compute kernels from the Hub. ☆326 · Updated this week
- Learn CUDA with PyTorch. ☆104 · Updated this week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators. ☆93 · Updated 4 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning. ☆128 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism. ☆78 · Updated last year