hova88 / CUDA-MatMul-Practice
☆15 · Updated last year
Alternatives and similar repositories for CUDA-MatMul-Practice:
Users interested in CUDA-MatMul-Practice are comparing it to the libraries listed below
- study of cutlass ☆21 · Updated 4 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 4 years ago
- ☆19 · Updated 4 years ago
- ☆36 · Updated 5 months ago
- ☆17 · Updated 11 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 · Updated last year
- ☆29 · Updated 11 months ago
- Common libraries for PPL projects ☆29 · Updated 2 weeks ago
- ☆16 · Updated last year
- CVFusion is an open-source deep learning compiler that fuses OpenCV operators. ☆29 · Updated 2 years ago
- CUDA 8-bit Tensor Core Matrix Multiplication based on the m16n16k16 WMMA API (see the sketch after this list) ☆28 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆106 · Updated 6 months ago
- ☆11 · Updated last year
- Optimize softmax in Triton in many cases ☆20 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆89 · Updated 3 weeks ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆60 · Updated 7 months ago
- EasyNN is a neural network inference framework built for teaching, aiming to let anyone write an inference framework on their own with zero prior experience! ☆26 · Updated 6 months ago
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- OneFlow->ONNX ☆42 · Updated last year
- CPU Memory Compiler and Parallel programming ☆25 · Updated 4 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆17 · Updated 5 months ago
- ☢️ TensorRT 2023 semifinal: Llama model inference acceleration and optimization based on TensorRT-LLM ☆46 · Updated last year
- Solutions for Programming Massively Parallel Processors, 2nd edition ☆31 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 3 weeks ago
- A layered and decoupled deep learning inference engine ☆72 · Updated last month
- ☆58 · Updated 4 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. ☆57 · Updated 6 months ago
- ☆18 · Updated last year
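
For context on the 8-bit m16n16k16 WMMA item above, here is a minimal, hedged sketch of an int8 Tensor Core tile multiply using the `nvcuda::wmma` API. It is not taken from any of the listed repositories; the kernel name and leading dimensions are hypothetical, it computes only a single 16x16 output tile, and it assumes an sm_72 or newer GPU.

```cuda
// Minimal illustrative sketch (hypothetical, not from the listed repos):
// one warp multiplies a 16x16 int8 A tile by a 16x16 int8 B tile and
// accumulates into an int32 C tile via the m16n16k16 WMMA API.
#include <mma.h>
#include <cstdint>
using namespace nvcuda;

__global__ void wmma_s8_gemm_tile(const int8_t* A, const int8_t* B, int32_t* C,
                                  int lda, int ldb, int ldc) {
    // Fragments for one m16n16k16 Tensor Core operation with int8 inputs.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, int8_t, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, int8_t, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, int32_t> c_frag;

    wmma::fill_fragment(c_frag, 0);                  // zero the accumulator
    wmma::load_matrix_sync(a_frag, A, lda);          // load 16x16 int8 A tile
    wmma::load_matrix_sync(b_frag, B, ldb);          // load 16x16 int8 B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor Cores
    wmma::store_matrix_sync(C, c_frag, ldc, wmma::mem_row_major);
}
```

A full int8 GEMM would tile over the K dimension and launch many warps; the repositories above cover those larger designs.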