This project is about convolution operator optimization on GPU, including GEMM-based (Implicit GEMM) convolution.
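For orientation, here is a minimal NumPy sketch of the im2col lowering that underlies GEMM-based convolution: input patches are unfolded into a column matrix so the convolution becomes a single matrix multiply. (An *implicit* GEMM kernel computes the same product without materializing the column matrix; the function and parameter names below are illustrative, not taken from this repository.)

```python
import numpy as np

def conv2d_im2col(x, w, stride=1):
    """GEMM-based 2D convolution (valid padding) via explicit im2col.

    x: input of shape (C, H, W); w: filters of shape (K, C, R, S).
    Returns output of shape (K, Ho, Wo).
    """
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho = (H - R) // stride + 1
    Wo = (W - S) // stride + 1

    # Unfold every receptive-field patch into one column: (C*R*S, Ho*Wo).
    cols = np.empty((C * R * S, Ho * Wo))
    idx = 0
    for i in range(Ho):
        for j in range(Wo):
            patch = x[:, i * stride:i * stride + R, j * stride:j * stride + S]
            cols[:, idx] = patch.ravel()  # (C, R, S) flattened row-major
            idx += 1

    # The convolution is now one GEMM: (K, C*R*S) @ (C*R*S, Ho*Wo).
    out = w.reshape(K, -1) @ cols
    return out.reshape(K, Ho, Wo)
```

On GPU, the performance question is how to tile this GEMM and generate the patch indices on the fly instead of storing `cols`, which is exactly what implicit-GEMM kernels do.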
☆42 · Sep 29, 2025 · Updated 5 months ago
Alternatives and similar repositories for conv_op_optimization
Users that are interested in conv_op_optimization are comparing it to the libraries listed below.
- GPU implementation of Winograd convolution ☆10 · Oct 23, 2017 · Updated 8 years ago
- ☆44 · Nov 1, 2025 · Updated 4 months ago
- ☆12 · Aug 31, 2023 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆16 · Aug 31, 2023 · Updated 2 years ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruct… ☆531 · Sep 8, 2024 · Updated last year
- ☆158 · Dec 26, 2024 · Updated last year
- Flash Attention in raw CUDA C beating PyTorch ☆38 · May 14, 2024 · Updated last year
- New batched algorithm for sparse matrix-matrix multiplication (SpMM) ☆16 · May 7, 2019 · Updated 6 years ago
- A simple high-performance CUDA GEMM implementation. ☆428 · Jan 4, 2024 · Updated 2 years ago
- FP8 flash attention implemented with the cutlass repository on the Ada architecture ☆81 · Aug 12, 2024 · Updated last year
- Gensis is a lightweight deep learning framework written from scratch in Python, with Triton as its backend for high-performance computing… ☆37 · Jan 15, 2026 · Updated 2 months ago
- Compiler plugin for performance analysis of HIP applications ☆13 · Apr 7, 2025 · Updated 11 months ago
- Fast GPU-based tensor core reductions ☆13 · Jan 13, 2023 · Updated 3 years ago
- ☆119 · May 16, 2025 · Updated 10 months ago
- Implement Flash Attention using Cute. ☆102 · Dec 17, 2024 · Updated last year
- Some of the fastest decoding range-based Asymmetric Numeral Systems (rANS) codecs for x64 ☆20 · Sep 3, 2024 · Updated last year
- ☆30 · Nov 16, 2024 · Updated last year
- ☆61 · Feb 15, 2026 · Updated last month
- ☆11 · Feb 28, 2023 · Updated 3 years ago
- ☆13 · Nov 3, 2025 · Updated 4 months ago
- A simplified flash-attention implementation using cutlass, intended for teaching ☆59 · Aug 12, 2024 · Updated last year
- An implementation of SGEMV with performance comparable to cuBLAS. ☆12 · May 21, 2021 · Updated 4 years ago
- CUDA 8-bit Tensor Core Matrix Multiplication based on m16n16k16 WMMA API ☆35 · Sep 15, 2023 · Updated 2 years ago
- ☆49 · Apr 15, 2024 · Updated last year
- Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance. ☆407 · Jan 2, 2025 · Updated last year
- Implementation and optimization of matrix multiplication on a single CPU (HPC-THU-2023-Autumn) ☆18 · Feb 27, 2024 · Updated 2 years ago
- Mixed precision training from scratch with Tensors and CUDA ☆28 · May 14, 2024 · Updated last year
- Recoil: Parallel rANS Decoding with Decoder-Adaptive Scalability ☆18 · Jun 26, 2023 · Updated 2 years ago
- ☆14 · Jun 30, 2021 · Updated 4 years ago
- ☆12 · Dec 22, 2024 · Updated last year
- ☆14 · May 28, 2019 · Updated 6 years ago
- A vector field rendering library ☆17 · Jul 31, 2019 · Updated 6 years ago
- Study notes on ggml, a machine learning inference framework ☆18 · Mar 24, 2024 · Updated last year
- Examples illustrating usage of the rocBLAS library ☆17 · Aug 12, 2024 · Updated last year
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several… ☆1,248 · Jul 29, 2023 · Updated 2 years ago
- Source code of the PPoPP '22 paper: "TileSpGEMM: A Tiled Algorithm for Parallel Sparse General Matrix-Matrix Multiplication on GPUs" by Y… ☆46 · May 22, 2024 · Updated last year
- Deploys the Nanodet detection algorithm on the OpenVINO inference framework, rewriting the pre- and post-processing stages for very fast detection on Intel CPU platforms; the model is also quantized (PTQ) to int8 precision with NNCF and PPQ for even faster inference ☆16 · Jun 14, 2023 · Updated 2 years ago
- FM-index representation of a de Bruijn graph ☆26 · Aug 7, 2017 · Updated 8 years ago
- ☆120 · Apr 2, 2025 · Updated 11 months ago