UDC-GAC / openCNN
A Winograd Minimal Filter Implementation in CUDA
☆24 · Updated 3 years ago
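openCNN implements Winograd's minimal filtering algorithm for convolution, which trades extra additions in the input, filter, and output transforms for fewer multiplications per output tile. As a rough illustration only (not code from openCNN; the function name `winograd_f23` is made up for this sketch), the snippet below computes the 1-D F(2,3) case, producing two outputs of a 3-tap filter with four multiplications instead of six; the 2-D F(2x2, 3x3) variant used for convolution layers nests the same transforms along both dimensions.

```cuda
// Minimal illustration of Winograd F(2,3): two outputs of a 3-tap filter
// from four multiplications (direct computation would need six).
// Standalone sketch, not code taken from openCNN.
#include <cstdio>

__host__ __device__ void winograd_f23(const float d[4], const float g[3], float y[2]) {
    // Filter transform: U = G * g
    float u0 = g[0];
    float u1 = 0.5f * (g[0] + g[1] + g[2]);
    float u2 = 0.5f * (g[0] - g[1] + g[2]);
    float u3 = g[2];
    // Data transform: V = B^T * d
    float v0 = d[0] - d[2];
    float v1 = d[1] + d[2];
    float v2 = d[2] - d[1];
    float v3 = d[1] - d[3];
    // Element-wise products: the four multiplications
    float m0 = u0 * v0, m1 = u1 * v1, m2 = u2 * v2, m3 = u3 * v3;
    // Output transform: y = A^T * m
    y[0] = m0 + m1 + m2;
    y[1] = m1 - m2 - m3;
}

int main() {
    float d[4] = {1.f, 2.f, 3.f, 4.f};
    float g[3] = {0.25f, 0.5f, 0.25f};
    float y[2];
    winograd_f23(d, g, y);
    // Reference: y[0] = d0*g0 + d1*g1 + d2*g2, y[1] = d1*g0 + d2*g1 + d3*g2
    printf("y = %f %f\n", y[0], y[1]);
    return 0;
}
```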
Alternatives and similar repositories for openCNN:
Users interested in openCNN are comparing it to the libraries listed below.
- Play GEMM with TVM ☆89 · Updated last year
- ☆39 · Updated 5 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆135 · Updated last year
- ☆91 · Updated 11 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA Cores ☆57 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆89 · Updated 3 weeks ago
- ☆37 · Updated 2 years ago
- Magicube, a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores ☆86 · Updated 2 years ago
- Dissecting NVIDIA GPU Architecture ☆90 · Updated 2 years ago
- ☆29 · Updated 11 months ago
- ☆15 · Updated 5 years ago
- ☆69 · Updated 2 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆100 · Updated 8 months ago
- Artifacts of EVT (ASPLOS'24) ☆24 · Updated last year
- An extension library of the WMMA API (Tensor Core API) ☆91 · Updated 8 months ago
- ☆137 · Updated 7 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆50 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆106 · Updated 6 months ago
- ☆43 · Updated 4 years ago
- ☆17 · Updated 4 years ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆28 · Updated last year
- Convolution operator optimization on the GPU, including GEMM-based (implicit GEMM) convolution ☆26 · Updated 2 months ago
- A study of Ampere's sparse matmul ☆17 · Updated 4 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆107 · Updated 2 years ago
- ☆82 · Updated last year
- Llama INT4 CUDA inference with AWQ ☆53 · Updated 2 months ago
- tophub autotvm log collections ☆70 · Updated 2 years ago
- ☆48 · Updated 5 years ago
- FP8 flash attention for the Ada architecture, implemented using the cutlass repository ☆60 · Updated 7 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to reach peak performance⚡️ (see the minimal WMMA sketch after this list) ☆59 · Updated 2 weeks ago
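Several of the entries above (the HGEMM, HGEMV, and WMMA-extension libraries, and the m16n16k16 int8 kernel) build on the WMMA Tensor Core API. As a minimal sketch of that common building block, assuming compute capability 7.0+ and not taken from any repository listed here, the kernel below has a single warp compute one 16x16x16 half-precision tile with a float accumulator:

```cuda
// One warp computes C(16x16) = A(16x16) * B(16x16) on Tensor Cores.
// A and C are row-major, B is column-major; fp16 inputs, fp32 accumulator.
// Illustrative sketch only, not code from the repositories above.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_hgemm_16x16x16(const half* A, const half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // C := 0
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```

A full HGEMM tiles many such fragments per thread block, stages A and B through shared memory, and loops over the K dimension; this sketch only shows the fragment load/mma/store sequence and would be launched with a single warp, e.g. `wmma_hgemm_16x16x16<<<1, 32>>>(dA, dB, dC)`.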