☆120 · Updated May 16, 2025
Alternatives and similar repositories for Awesome-Cute
Users interested in Awesome-Cute are comparing it to the libraries listed below.
- ☆265 · Updated Jul 11, 2024
- FP8 flash attention implemented with the CUTLASS repository on the Ada architecture ☆81 · Updated Aug 12, 2024
- Implement Flash Attention using CuTe. ☆106 · Updated Dec 17, 2024
- ☆175 · Updated Feb 5, 2026
- Examples of CUDA implementations using CUTLASS CuTe ☆272 · Updated Jul 1, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- Artifacts of EVT (ASPLOS'24) ☆30 · Updated Mar 6, 2024
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆508 · Updated Jan 20, 2026
- ☆188 · Updated May 7, 2025
- An easy-to-understand TensorOp Matmul tutorial ☆428 · Updated Mar 5, 2026
- ☆99 · Updated May 31, 2025
- DeeperGEMM: a heavily optimized version ☆86 · Updated May 5, 2025
- ☆49 · Updated Apr 15, 2024
- A lightweight design for computation-communication overlap. ☆231 · Updated Jan 20, 2026
- Decoding Attention is optimized specifically for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated Jun 11, 2025
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated Feb 20, 2026
- ☆65 · Updated Feb 15, 2026
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆74 · Updated Sep 8, 2024
- A simplified flash-attention implementation using CUTLASS, written for teaching purposes ☆59 · Updated Aug 12, 2024
- CuTe layout visualization (see the layout sketch after this list) ☆38 · Updated Jan 18, 2026
- GEMV implementation with CUTLASS ☆21 · Updated Aug 21, 2025
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆151 · Updated May 10, 2025
- Fastest kernels written from scratch ☆576 · Updated Sep 18, 2025
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆109 · Updated Jun 28, 2025
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆46 · Updated Feb 27, 2025
- CUTLASS and CuTe examples ☆135 · Updated Nov 30, 2025
- Sample codes using NVSHMEM on multi-GPU systems ☆30 · Updated Jan 22, 2023
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions ☆544 · Updated Sep 8, 2024
- ☆98 · Updated Mar 26, 2025
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel… ☆193 · Updated Jan 28, 2025
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated Mar 13, 2024
- BitBLAS is a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆762 · Updated Aug 6, 2025
- Applied AI experiments and examples for PyTorch ☆321 · Updated Aug 22, 2025
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging the Hopper architecture's TMA, WGMMA, and Thread Block Clusters ☆10 · Updated Dec 4, 2024
- ☆57 · Updated Feb 24, 2026
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels, covering several… ☆1,298 · Updated Jul 29, 2023
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,297 · Updated Aug 28, 2025
- ☆23 · Updated Aug 20, 2025
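
Several of the entries above center on CuTe layouts: the layout visualizer, the CUTLASS and CuTe example collections, and the CuTe-based flash attention and GEMM ports. As a minimal orientation sketch, assuming CUTLASS 3.x headers are on the include path and the file is compiled with nvcc (the shape and stride values below are purely illustrative and not taken from any listed repository):

```cpp
// Minimal CuTe layout sketch (assumes CUTLASS 3.x headers; values are illustrative).
#include <cstdio>
#include <cute/tensor.hpp>

int main() {
  using namespace cute;

  // A 4x8 row-major layout: shape (4, 8) with strides (8, 1).
  auto layout = make_layout(make_shape(Int<4>{}, Int<8>{}),
                            make_stride(Int<8>{}, Int<1>{}));

  print(layout);          // compact form: "(_4,_8):(_8,_1)"
  printf("\n");
  print_layout(layout);   // full 2-D coordinate -> offset grid

  // A layout maps a logical coordinate to a linear offset: (1, 2) -> 1*8 + 2 = 10.
  printf("offset of (1,2) = %d\n", int(layout(1, 2)));
  return 0;
}
```

This coordinate-to-offset mapping is what the layout-visualization repositories above render graphically, and it is the building block the CuTe-based GEMM and flash-attention implementations compose into tiled copies and MMA partitions.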