gpu-mode / ring-attention
ring-attention experiments
☆160 · Updated last year
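For context on what this repo explores, here is a minimal single-process sketch of the ring-attention idea, not taken from this repository's code: each participant keeps its query block fixed while key/value blocks rotate one hop per step around a ring, and an online-softmax accumulator keeps the result exact without ever materializing the full attention matrix. The function name and blocking scheme below are illustrative assumptions, and causal masking is omitted.

```python
import torch

def ring_attention(q_blocks, k_blocks, v_blocks):
    """Each argument is a list of [block_len, d] tensors, one per simulated 'device'."""
    n = len(q_blocks)
    outputs = []
    for i in range(n):  # the work device i would do with its resident Q block
        q = q_blocks[i]
        m = torch.full((q.shape[0], 1), float("-inf"))  # running row max
        l = torch.zeros(q.shape[0], 1)                  # running softmax normalizer
        acc = torch.zeros_like(q)                       # running weighted sum of V
        for step in range(n):
            j = (i + step) % n  # index of the K/V block arriving on this ring step
            scores = q @ k_blocks[j].T / q.shape[-1] ** 0.5
            m_new = torch.maximum(m, scores.max(dim=-1, keepdim=True).values)
            p = torch.exp(scores - m_new)
            scale = torch.exp(m - m_new)  # rescale old statistics to the new max
            l = l * scale + p.sum(dim=-1, keepdim=True)
            acc = acc * scale + p @ v_blocks[j]
            m = m_new
        outputs.append(acc / l)
    return torch.cat(outputs)

# Quick check against ordinary attention on random data.
torch.manual_seed(0)
q, k, v = (torch.randn(8, 4) for _ in range(3))
ref = torch.softmax(q @ k.T / 4 ** 0.5, dim=-1) @ v
out = ring_attention(list(q.chunk(4)), list(k.chunk(4)), list(v.chunk(4)))
assert torch.allclose(out, ref, atol=1e-5)
```

In a real multi-GPU run, the block arriving in the inner loop is a peer-to-peer send/recv overlapped with the local matmuls; the loop above only mirrors the arithmetic.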
Alternatives and similar repositories for ring-attention
Users interested in ring-attention are comparing it to the repositories listed below.
- ☆263 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆257 · Updated 2 months ago
- Cataloging released Triton kernels. ☆278 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton ☆410 · Updated this week
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- Collection of kernels written in the Triton language ☆172 · Updated 8 months ago
- A bunch of kernels that might make stuff slower 😉 ☆69 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆351 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆301 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 6 months ago
- Extensible collectives library in Triton ☆91 · Updated 8 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated 3 weeks ago
- Ship correct and fast LLM kernels to PyTorch ☆126 · Updated this week
- Triton-based Symmetric Memory operators and examples ☆67 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated last week
- ☆115 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3. ☆127 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆89 · Updated last year
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆199 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆187 · Updated this week
- ☆99 · Updated last year
- ☆133 · Updated 6 months ago
- JAX backend for SGL ☆200 · Updated this week
- Load compute kernels from the Hub ☆352 · Updated last week
- A minimal implementation of vllm. ☆63 · Updated last year
- Learn CUDA with PyTorch ☆124 · Updated 3 weeks ago
- Kernels, of the mega variety ☆631 · Updated 2 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆61 · Updated this week
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆249 · Updated last year