Several optimization methods of half-precision general matrix vector multiplication (HGEMV) using CUDA cores.
☆74 · Sep 8, 2024 · Updated last year
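For context, HGEMV computes y = A·x with half-precision inputs. Below is a minimal sketch of one common CUDA-core optimization for this operation: one warp per output row, with FP32 accumulation and a warp-shuffle reduction. The kernel name and launch configuration are illustrative assumptions, not the repository's actual implementation.

```cuda
#include <cuda_fp16.h>

// Hypothetical minimal HGEMV: y = A * x, A is M x N row-major, half precision.
// One warp per output row; each lane strides across the row, and the 32 partial
// sums are combined with a warp-shuffle reduction. Accumulation is done in FP32
// to limit rounding error.
__global__ void hgemv_warp_per_row(const half* __restrict__ A,
                                   const half* __restrict__ x,
                                   half* __restrict__ y,
                                   int M, int N) {
    const int warp_id = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    const int lane    = threadIdx.x % 32;
    if (warp_id >= M) return;

    const half* row = A + (size_t)warp_id * N;
    float acc = 0.0f;
    for (int col = lane; col < N; col += 32) {
        acc += __half2float(row[col]) * __half2float(x[col]);
    }
    // Tree reduction of the per-lane partial sums within the warp.
    for (int offset = 16; offset > 0; offset >>= 1) {
        acc += __shfl_down_sync(0xffffffff, acc, offset);
    }
    if (lane == 0) y[warp_id] = __float2half(acc);
}
```

A launch such as `hgemv_warp_per_row<<<(M * 32 + 255) / 256, 256>>>(dA, dx, dy, M, N);` assigns eight warps per block, one per output row. Real HGEMV kernels typically add vectorized `half2`/128-bit loads and tune the thread-to-row mapping, which is where most of the speedup in repositories like this one comes from.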
Alternatives and similar repositories for cuda_hgemv
Users interested in cuda_hgemv are comparing it to the libraries listed below.
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instructions (a minimal WMMA sketch follows this list) ☆544 · Sep 8, 2024 · Updated last year
- High-speed GEMV kernels, up to a 2.7x speedup over the PyTorch baseline. ☆129 · Jul 13, 2024 · Updated last year
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter Notebooks. ☆15 · Jun 1, 2023 · Updated 2 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 10 months ago
- ☆20 · Sep 28, 2024 · Updated last year
- ☆120 · May 16, 2025 · Updated 11 months ago
- ☆265 · Jul 11, 2024 · Updated last year
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Nov 15, 2024 · Updated last year
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Feb 20, 2026 · Updated 2 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆272 · Jul 1, 2025 · Updated 10 months ago
- This repository deploys the Nanodet detection algorithm on the OpenVINO inference framework, with rewritten pre- and post-processing for very fast detection on Intel CPU platforms. The model is also quantized (PTQ) to int8 with NNCF and PPQ for even faster inference. ☆16 · Jun 14, 2023 · Updated 2 years ago
- CUTLASS and CuTe Examples ☆135 · Nov 30, 2025 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆446 · Apr 27, 2026 · Updated last week
- ☆174 · Feb 5, 2026 · Updated 3 months ago
- A practical way of learning Swizzle ☆38 · Feb 3, 2025 · Updated last year
- ☆87 · Jan 23, 2025 · Updated last year
- ☆65 · Feb 15, 2026 · Updated 2 months ago
- ☆98 · Mar 26, 2025 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆81 · Aug 12, 2024 · Updated last year
- ☆160 · Dec 26, 2024 · Updated last year
- ☆45 · Nov 1, 2025 · Updated 6 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆46 · Feb 27, 2025 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆64 · Mar 25, 2025 · Updated last year
- Flash Attention implemented using CuTe. ☆106 · Dec 17, 2024 · Updated last year
- Boosting 4-bit inference kernels with 2:4 sparsity ☆96 · Sep 4, 2024 · Updated last year
- ☆33 · Feb 3, 2025 · Updated last year
- Sample code using NVSHMEM on multiple GPUs ☆30 · Jan 22, 2023 · Updated 3 years ago
- CUDA Matrix Multiplication Optimization ☆269 · Jul 19, 2024 · Updated last year
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆21 · Jan 24, 2025 · Updated last year
- ☆19 · Aug 23, 2022 · Updated 3 years ago
- ☆99 · May 31, 2025 · Updated 11 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆428 · Mar 5, 2026 · Updated 2 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆508 · Jan 20, 2026 · Updated 3 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 9 months ago
- ☆14 · Nov 3, 2025 · Updated 6 months ago
- ☆13 · Jan 7, 2025 · Updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ⚡️ ☆151 · May 10, 2025 · Updated 11 months ago
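Several entries in this list (including the HGEMM repositories near its top and bottom) revolve around Tensor Core HGEMM via the WMMA API. As a rough illustration of what that API looks like, here is a minimal, unoptimized sketch in which one warp computes a single 16x16 output tile with FP32 accumulation. The kernel name, tiling, and launch assumptions are hypothetical and far simpler than what those repositories actually implement.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Hypothetical minimal HGEMM tile kernel using the WMMA API (requires sm_70+).
// Assumes row-major A (M x K), row-major B (K x N), with M, N, K multiples of 16.
// One warp computes one 16x16 tile of C, accumulating in FP32.
__global__ void hgemm_wmma_naive(const half* A, const half* B, float* C,
                                 int M, int N, int K) {
    const int warps_per_block = blockDim.x / 32;
    const int warp_id = blockIdx.x * warps_per_block + threadIdx.x / 32;
    const int tiles_n = N / 16;
    const int tile_m = (warp_id / tiles_n) * 16;   // row offset of this C tile
    const int tile_n = (warp_id % tiles_n) * 16;   // col offset of this C tile
    if (tile_m >= M) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);

    // March along K in 16-wide steps, issuing one Tensor Core MMA per step.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + tile_m * K + k, K);
        wmma::load_matrix_sync(b_frag, B + k * N + tile_n, N);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }
    wmma::store_matrix_sync(C + tile_m * N + tile_n, c_frag, N, wmma::mem_row_major);
}
```

Launching one warp per output tile, e.g. `hgemm_wmma_naive<<<(M / 16) * (N / 16), 32>>>(dA, dB, dC, M, N, K);`, covers the whole output. Optimized HGEMM kernels layer shared-memory staging, double buffering, and swizzled layouts on top of this basic structure.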