gpu-mode / ring-attention
ring-attention experiments
☆154 · Updated last year
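For context: ring attention shards a long sequence across devices and rotates key/value blocks around a ring, merging each device's partial attention results with an online softmax so no device ever needs the full KV cache. Below is a minimal single-process sketch of that pattern. It illustrates the general technique only, not this repository's actual API; the `ring_attention` function and its block-list interface are hypothetical.

```python
# Minimal single-process sketch of the ring-attention pattern (non-causal).
# Hypothetical illustration; not the gpu-mode/ring-attention repo's API.
import torch

def ring_attention(q_blocks, k_blocks, v_blocks):
    """Each argument is a list of [block_len, d] tensors, one per simulated device."""
    n = len(q_blocks)
    d = q_blocks[0].shape[-1]
    outs = []
    for i in range(n):                     # one pass per simulated device
        q = q_blocks[i]
        m = torch.full((q.shape[0], 1), float("-inf"))  # running row max
        l = torch.zeros(q.shape[0], 1)                  # running softmax normalizer
        acc = torch.zeros(q.shape[0], d)                # running weighted sum of V
        for step in range(n):              # n ring steps: every KV block visits once
            j = (i + step) % n             # KV block "arriving" on this ring step
            s = (q @ k_blocks[j].T) / d ** 0.5
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            scale = torch.exp(m - m_new)   # rescale old accumulators to the new max
            p = torch.exp(s - m_new)
            l = l * scale + p.sum(dim=-1, keepdim=True)
            acc = acc * scale + p @ v_blocks[j]
            m = m_new
        outs.append(acc / l)               # finalize the online softmax
    return torch.cat(outs)

# Sanity check against ordinary full attention.
if __name__ == "__main__":
    torch.manual_seed(0)
    qs = [torch.randn(4, 8) for _ in range(3)]
    ks = [torch.randn(4, 8) for _ in range(3)]
    vs = [torch.randn(4, 8) for _ in range(3)]
    out = ring_attention(qs, ks, vs)
    Q, K, V = torch.cat(qs), torch.cat(ks), torch.cat(vs)
    ref = torch.softmax(Q @ K.T / 8 ** 0.5, dim=-1) @ V
    print(torch.allclose(out, ref, atol=1e-5))  # expected: True
```

In a real multi-device setting, `k_blocks[j]` and `v_blocks[j]` would arrive via peer-to-peer send/recv from a ring neighbor rather than an index into local lists, letting the transfer of the next block overlap with the current block's attention compute.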
Alternatives and similar repositories for ring-attention
Users interested in ring-attention are comparing it to the libraries listed below.
- ☆240 · Updated this week
- Cataloging released Triton kernels. ☆263 · Updated last month
- Collection of kernels written in the Triton language. ☆157 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton. ☆381 · Updated 3 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. ☆246 · Updated 2 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆258 · Updated last week
- Applied AI experiments and examples for PyTorch. ☆299 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX. ☆223 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆62 · Updated this week
- ☆92 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆124 · Updated 4 months ago
- Extensible collectives library in Triton. ☆89 · Updated 6 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆270 · Updated 2 months ago
- ☆130 · Updated 4 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆215 · Updated this week
- A minimal cache manager for PagedAttention, on top of llama3. ☆123 · Updated last year
- Triton-based Symmetric Memory operators and examples. ☆38 · Updated this week
- Learn CUDA with PyTorch. ☆92 · Updated 3 weeks ago
- Boosting 4-bit inference kernels with 2:4 sparsity. ☆83 · Updated last year
- ☆112 · Updated last year
- Kernels, of the mega variety. ☆586 · Updated 3 weeks ago
- How to ensure correctness and ship LLM-generated kernels in PyTorch. ☆66 · Updated last week
- Flash-Muon: An Efficient Implementation of the Muon Optimizer. ☆195 · Updated 4 months ago
- A Quirky Assortment of CuTe Kernels. ☆627 · Updated last week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning. ☆119 · Updated 3 weeks ago
- ☆174 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆389 · Updated this week
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆97 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆233 · Updated 5 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆145 · Updated 2 years ago