meta-pytorch / BackendBench
How to ensure correctness and ship LLM-generated kernels in PyTorch
☆60 · Updated last week
Alternatives and similar repositories for BackendBench
Users interested in BackendBench are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆88 · Updated 6 months ago
- Triton-based Symmetric Memory operators and examples ☆31 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated last week
- Ring-attention experiments ☆152 · Updated 11 months ago
- Collection of kernels written in the Triton language ☆155 · Updated 5 months ago
- ☆90 · Updated 10 months ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆41 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆99 · Updated 3 weeks ago
- DeeperGEMM: crazy optimized version ☆71 · Updated 4 months ago
- Applied AI experiments and examples for PyTorch ☆296 · Updated last month
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆151 · Updated last week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated 2 weeks ago
- Example of applying CUDA graphs to LLaMA-v2 ☆12 · Updated 2 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆230 · Updated this week
- ☆240 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆59 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆241 · Updated last month
- Cataloging released Triton kernels. ☆261 · Updated 3 weeks ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆142 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆123 · Updated 4 months ago
- A parallel framework for training deep neural networks ☆63 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆82 · Updated last year
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆324 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch native components. ☆213 · Updated last week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆123 · Updated 3 weeks ago
- ☆217 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆373 · Updated last week
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆48 · Updated last week
- ☆64 · Updated 5 months ago
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 6 months ago