gpu-mode / ring-attention
ring-attention experiments
★165 · Updated last year
Alternatives and similar repositories for ring-attention
Users interested in ring-attention are comparing it to the libraries listed below.
- ★286 · Updated this week
- A bunch of kernels that might make stuff slower · ★75 · Updated this week
- Applied AI experiments and examples for PyTorch · ★315 · Updated 5 months ago
- Cataloging released Triton kernels. · ★292 · Updated 5 months ago
- Ship correct and fast LLM kernels to PyTorch · ★140 · Updated 3 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. · ★263 · Updated 4 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ★324 · Updated this week
- Collection of kernels written in Triton language