☆22, updated Aug 14, 2024
Alternatives and similar repositories for cute_gemm
Users interested in cute_gemm are comparing it to the libraries listed below.
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository (☆79, updated Aug 12, 2024)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆21, updated Feb 9, 2026)
- A streamlined flash-attention implementation using cutlass, intended for teaching (☆58, updated Aug 12, 2024)
- Multiple GEMM operators built with cutlass to support LLM inference (☆21, updated Aug 3, 2025)
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API (☆35, updated Sep 15, 2023)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- Whisper in TensorRT-LLM (☆17, updated Sep 21, 2023)
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (☆490, updated Jan 20, 2026)
- ☆168 (updated Feb 5, 2026)
- A practical way of learning Swizzle (☆37, updated Feb 3, 2025)
- ☆52 (updated May 19, 2025)
- ☆155 (updated Mar 4, 2025)
- ☆26 (updated Feb 17, 2025)
- ☆116 (updated May 16, 2025)
- Optimizes softmax in Triton across many cases (☆23, updated Sep 6, 2024)
- ☆65 (updated Apr 26, 2025)
- ☆262 (updated Jul 11, 2024)
- Standalone Flash Attention v2 kernel without a libtorch dependency (☆114, updated Sep 10, 2024)
- CPU memory compiler and parallel programming (☆26, updated Nov 18, 2024)
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios (☆44, updated Feb 27, 2025)
- Optimizing GEMM with tensor cores step by step (☆36, updated Dec 17, 2023)
- TensorRT encapsulation for learning, rewriting, and practicing (☆30, updated Oct 19, 2022)
- A simple high-performance CUDA GEMM implementation (☆426, updated Jan 4, 2024)
- Matrix multiply-accumulate with CUDA and WMMA (Tensor Core); a minimal sketch of this pattern appears after this list (☆144, updated Aug 18, 2020)
- ☆159 (updated Dec 26, 2024)
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instructions (☆526, updated Sep 8, 2024)
- ☆152 (updated Jan 9, 2025)
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regression" (☆23, updated Oct 1, 2025)
- ☆49 (updated Apr 15, 2024)
- Lightweight framework for 3D rendering (☆11, updated Jun 5, 2023)
- CUTLASS and CuTe Examples (☆133, updated Nov 30, 2025)
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (☆46, updated Jun 11, 2025)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆96, updated Feb 20, 2026)
- ☆13 (updated Jun 18, 2024)
- A very handy TNN classify demo (☆11, updated Mar 24, 2021)
- Implementation of "Pixel-In-Pixel Net: Towards Efficient Facial Landmark Detection in the Wild" (☆11, updated Jul 6, 2023)
- A C++ hash map/table that utilizes SIMD (specifically Intel x86 SSE/AVX) (☆11, updated Apr 30, 2019)
- The C++ matting code is based on BackgroundMattingV2 and RobustVideoMatting (☆11, updated Nov 20, 2021)
- Triton version of GQA flash attention, based on the tutorial (☆12, updated Aug 4, 2024)
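
Many of the GEMM entries above (the m16n16k16 8-bit matrix multiplication, the step-by-step tensor-core GEMM, the WMMA multiply-accumulate repo, and the HGEMM optimization collection) build on the same warp-level primitive: CUDA's WMMA API. The sketch below shows that primitive in its simplest form, one warp computing one 16×16 tile of C = A×B. It is a minimal illustration written for this list, not code from any repository above; the kernel name `wmma_gemm_16x16` and the layout and launch assumptions (M, N, K multiples of 16, row-major A and C, column-major B, grid sized so every warp maps to a valid tile) are assumptions of this sketch.

```cuda
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// Minimal WMMA sketch: each warp computes one 16x16 tile of C = A * B.
// A is row-major (M x K), B is column-major (K x N), C is row-major (M x N),
// with M, N, K assumed to be multiples of 16. Inputs are fp16, accumulation
// is fp32 -- the classic m16n16k16 tensor-core configuration.
__global__ void wmma_gemm_16x16(const half *A, const half *B, float *C,
                                int M, int N, int K) {
    // Map each warp to one output tile: warps along x index tile rows of C,
    // threads/blocks along y index tile columns.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);

    // March along K in 16-wide steps, issuing one tensor-core MMA per step.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + warpN * 16 * K + k, K);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }

    // Write the accumulated 16x16 tile back to global memory.
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, c_frag, N,
                            wmma::mem_row_major);
}
```

A launch such as `dim3 block(128, 4)` gives each block four warps along x and four rows along y, i.e. one 64×64 tile of C per block. The repositories listed here differ mainly in what they layer on top of this skeleton: shared-memory staging, swizzled layouts to avoid bank conflicts, and software pipelining.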