weishengying / cute_gemm
☆21 · Aug 14, 2024 · Updated last year
Alternatives and similar repositories for cute_gemm
Users interested in cute_gemm are comparing it to the libraries listed below.
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆78 · Aug 12, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- A pared-down flash-attention implementation using cutlass, intended as a teaching example ☆56 · Aug 12, 2024 · Updated last year
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆20 · Aug 3, 2025 · Updated 6 months ago
- CUDA 8-bit Tensor Core Matrix Multiplication based on m16n16k16 WMMA API ☆35 · Sep 15, 2023 · Updated 2 years ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Whisper in TensorRT-LLM ☆17 · Sep 21, 2023 · Updated 2 years ago
- Lightweight Python Wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Dec 17, 2024 · Updated last year
- flash attention tutorial written in python, triton, cuda, cutlass ☆486 · Jan 20, 2026 · Updated 3 weeks ago
- ☆162 · Feb 5, 2026 · Updated last week
- ☆52 · May 19, 2025 · Updated 8 months ago
- A practical way of learning Swizzle ☆36 · Feb 3, 2025 · Updated last year
- ☆155 · Mar 4, 2025 · Updated 11 months ago
- ☆114 · May 16, 2025 · Updated 8 months ago
- Optimize softmax in triton in many cases ☆22 · Sep 6, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- ☆261 · Jul 11, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Sep 10, 2024 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated 11 months ago
- Optimize GEMM with tensorcore step by step ☆36 · Dec 17, 2023 · Updated 2 years ago
- TensorRT encapsulation, learn, rewrite, practice. ☆29 · Oct 19, 2022 · Updated 3 years ago
- A simple high performance CUDA GEMM implementation. ☆426 · Jan 4, 2024 · Updated 2 years ago
- The aim of this project is to develop a model capable of detecting fabric defects. ☆10 · Dec 13, 2023 · Updated 2 years ago
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core) ☆145 · Aug 18, 2020 · Updated 5 years ago
- ☆158 · Dec 26, 2024 · Updated last year
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instructions ☆522 · Sep 8, 2024 · Updated last year
- ☆152 · Jan 9, 2025 · Updated last year
- CUTLASS and CuTe Examples ☆127 · Nov 30, 2025 · Updated 2 months ago
- Lightweight framework for 3D rendering. ☆11 · Jun 5, 2023 · Updated 2 years ago
- ☆49 · Apr 15, 2024 · Updated last year
- Fast low-bit matmul kernels in Triton ☆429 · Feb 1, 2026 · Updated 2 weeks ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 8 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- ☆10 · Jun 6, 2023 · Updated 2 years ago
- ☆10 · Apr 9, 2017 · Updated 8 years ago
- A simple neural network built with standard C++17 and the Eigen library, supporting both forward and backward propagation. ☆10 · Jul 27, 2024 · Updated last year
- Persistent dense gemm for Hopper in `CuTeDSL` ☆15 · Aug 9, 2025 · Updated 6 months ago
- A very handy TNN classification demo ☆11 · Mar 24, 2021 · Updated 4 years ago
- ☆10 · Nov 24, 2015 · Updated 10 years ago