ColfaxResearch / cfx-article-src
☆47 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for cfx-article-src
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C’s level of abstraction for processing tiles. ☆148 · Updated this week
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆48 · Updated 2 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆85 · Updated 8 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆98 · Updated last month
- Examples of CUDA implementations using CUTLASS CuTe ☆82 · Updated last week
- Play GEMM with TVM ☆84 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal single-pass sketch follows this list) ☆59 · Updated 6 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆87 · Updated 3 months ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆49 · Updated 3 months ago
- LLaMA INT4 CUDA inference with AWQ ☆47 · Updated 4 months ago
- FP8 flash attention on the Ada architecture, implemented with the CUTLASS repository ☆51 · Updated 2 months ago
- CUDA Matrix Multiplication Optimization ☆139 · Updated 3 months ago
- An extension library for the WMMA API (Tensor Core API) ☆82 · Updated 3 months ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆26 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆51 · Updated last week
- An easy-to-understand TensorOp Matmul tutorial ☆287 · Updated last month
- Matrix Multiply-Accumulate with CUDA and WMMA (Tensor Core); a minimal WMMA sketch also follows this list. ☆114 · Updated 4 years ago
- Experimental projects related to TensorRT ☆77 · Updated this week
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆217 · Updated last week
- Magicube is a high-performance library for quantized sparse-matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆81 · Updated last year
- Dissecting NVIDIA GPU Architecture ☆82 · Updated 2 years ago
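The entries above are only links; the sketches below are not taken from any of the listed repositories. First, a minimal illustration of the single-pass technique behind the "Online normalizer calculation for softmax" benchmark entry: a running maximum `m` and running denominator `d` are updated together, with `d` rescaled whenever the maximum grows, so the softmax statistics need one pass over the data instead of two. The function name `online_softmax` is just for illustration; this is plain host code compiled with nvcc.

```cuda
#include <math.h>
#include <stdio.h>

// Single-pass softmax statistics: running max m and running normalizer d.
// Whenever the max increases, the accumulated d is rescaled by exp(m_old - m_new).
void online_softmax(const float* x, float* y, int n) {
    float m = -INFINITY;  // running maximum
    float d = 0.0f;       // running sum of exp(x[i] - m)
    for (int i = 0; i < n; ++i) {
        float m_new = fmaxf(m, x[i]);
        d = d * expf(m - m_new) + expf(x[i] - m_new);
        m = m_new;
    }
    for (int i = 0; i < n; ++i)
        y[i] = expf(x[i] - m) / d;  // same result as the classic two-pass softmax
}

int main() {
    float x[4] = {1.f, 2.f, 3.f, 4.f}, y[4];
    online_softmax(x, y, 4);
    for (int i = 0; i < 4; ++i) printf("%f\n", y[i]);
    return 0;
}
```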
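Second, a minimal sketch of the WMMA (Tensor Core) API that several of the listed projects build on: one warp loads 16x16 fp16 tiles of A and B into fragments, issues a single `mma_sync`, and stores the float accumulator. The kernel name, the 16x16x16 single-tile problem, the row-major A / column-major B layout, and the one-warp launch are all assumptions chosen to keep the example short; it illustrates the API, not any particular repository's kernel.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes D = A * B for a single 16x16x16 tile
// (half inputs, float accumulation); requires sm_70 or newer.
__global__ void wmma_16x16x16(const half* A, const half* B, float* D) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);    // start from a zero accumulator
    wmma::load_matrix_sync(a_frag, A, 16);  // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);
    wmma::store_matrix_sync(D, acc_frag, 16, wmma::mem_row_major);
}
// Launch example (exactly one warp): wmma_16x16x16<<<1, 32>>>(dA, dB, dD);
```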