hova88 / CUDA-MatMul-Practice
☆18 · Updated 2 years ago
Alternatives and similar repositories for CUDA-MatMul-Practice
Users interested in CUDA-MatMul-Practice are comparing it to the repositories listed below.
- ☆21 · Updated 4 years ago
- ☆38 · Updated last year
- A study of cutlass ☆22 · Updated last year
- Step-by-step SGEMM optimization with CUDA ☆21 · Updated last year
- ☆14 · Updated 3 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆61 · Updated 5 years ago
- ☆49 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Updated last year
- ☆19 · Updated last year
- A simplified flash-attention implementation using cutlass, intended for teaching ☆56 · Updated last year
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆78 · Updated last year
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆72 · Updated last year
- ☆60 · Updated last year
- ☆34 · Updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆148 · Updated 9 months ago
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆44 · Updated 11 months ago
- ☆145 · Updated last year
- ☆97 · Updated 4 years ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆35 · Updated 2 years ago
- Multiple GEMM operators built with cutlass to support LLM inference ☆20 · Updated 6 months ago
- Common libraries for PPL projects ☆31 · Updated 11 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated 4 months ago
- A llama model inference framework implemented in CUDA C++ ☆64 · Updated last year
- ☆152 · Updated last year
- A layered, decoupled deep learning inference engine ☆79 · Updated 11 months ago
- ☢️ TensorRT 2023 contest (second round): Llama model inference acceleration based on TensorRT-LLM ☆51 · Updated 2 years ago
- A lightweight large-model inference framework ☆21 · Updated 8 months ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆57 · Updated 4 years ago
- ☆141 · Updated last year
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆84 · Updated 2 years ago