ademeure / QuickRunCUDA
☆15 · Updated last month
Alternatives and similar repositories for QuickRunCUDA
Users interested in QuickRunCUDA are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆91 · Updated 8 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆104 · Updated 6 months ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆91 · Updated 3 months ago
- ☆53 · Updated 7 months ago
- ☆65 · Updated 8 months ago
- ☆39 · Updated 2 weeks ago
- ☆99 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated 5 months ago
- Ship correct and fast LLM kernels to PyTorch ☆127 · Updated last week
- ☆52 · Updated 7 months ago
- Triton-based Symmetric Memory operators and examples ☆70 · Updated 2 months ago
- ☆32 · Updated 5 months ago
- A bunch of kernels that might make stuff slower 😉 ☆69 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆192 · Updated last week
- Debug print operator for cudagraph debugging ☆14 · Updated last year
- Automatic differentiation for Triton kernels ☆29 · Updated 4 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated last year
- Companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆83 · Updated 3 months ago
- GitHub mirror of the triton-lang/triton repo. ☆109 · Updated this week
- ☆115 · Updated 7 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆100 · Updated this week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆148 · Updated last month
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆73 · Updated this week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆140 · Updated this week
- ☆67 · Updated last week
- ☆22 · Updated 5 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆150 · Updated 3 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- ☆125 · Updated 4 months ago