dianhsu / swin-transformer-cpp
Swin Transformer C++ Implementation
☆53 · Updated 3 years ago
Related projects:
- CUDA Templates for Linear Algebra Subroutines ☆90 · Updated 4 months ago
- A simple Transformer model implemented in C++ (Attention Is All You Need). ☆37 · Updated 3 years ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆40 · Updated last week
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆22 · Updated last year
- Playing with GEMM in TVM ☆81 · Updated last year
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆82 · Updated 6 months ago
- A set of examples around MegEngine ☆29 · Updated 9 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆57 · Updated 4 years ago
- LLaMA INT4 CUDA inference with AWQ ☆46 · Updated 2 months ago
- A Winograd minimal-filtering implementation in CUDA ☆20 · Updated 3 years ago
- Standalone Flash Attention v2 kernel without the libtorch dependency ☆93 · Updated last week
- Optimizing GEMM with Tensor Cores, step by step ☆11 · Updated 9 months ago
- Manually implemented quantization-aware training ☆21 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆46 · Updated last month
- CUDA matrix multiplication optimization ☆118 · Updated 2 months ago
- ResNet implementation, training, and inference using the LibTorch C++ API ☆34 · Updated 3 months ago
- Basic quantization methods, including QAT, PTQ, per-channel, per-tensor, DoReFa, LSQ, AdaRound, OMSE, histogram, bias correction, etc. ☆38 · Updated last year
- Inference of quantization-aware trained networks using TensorRT ☆77 · Updated last year
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles ☆114 · Updated last week