flashinfer-ai / cutlass-viz
☆64 · Updated 5 months ago
Alternatives and similar repositories for cutlass-viz
Users interested in cutlass-viz are comparing it to the libraries listed below:
- DeeperGEMM: a heavily optimized variant of DeepGEMM ☆71 · Updated 4 months ago
- ☆50 · Updated 4 months ago
- Tile-based language built for AI computation across all scales ☆61 · Updated this week
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated 2 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆130 · Updated 2 weeks ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. ☆118 · Updated 4 months ago
- ☆98 · Updated last year
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆60 · Updated last month
- Implements Flash Attention using CuTe. ☆95 · Updated 9 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 3 months ago
- Debug-print operator for CUDA graph debugging ☆13 · Updated last year
- ☆38 · Updated last month
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆66 · Updated this week
- Quantized Attention on GPU ☆44 · Updated 10 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆75 · Updated this week
- ☆95 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆177 · Updated 2 weeks ago
- ☆30 · Updated 3 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆43 · Updated 3 months ago
- An FP8 flash attention implementation for the Ada architecture using the CUTLASS repository. ☆75 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated 2 weeks ago
- ☆82 · Updated 8 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆58 · Updated 6 months ago
- ☆112 · Updated last month
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆99 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆110 · Updated last year
- ☆98 · Updated 4 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆121 · Updated 4 months ago
- ☆42 · Updated 4 months ago