coderonion / cuda-beginner-course-cpp-version
Companion code for the Bilibili video course "Introduction to CUDA 12.x Parallel Programming (C++ Edition)"
☆29 · Updated last year
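The course's subject is introductory CUDA C++. As a rough illustration of the kind of material such a course covers (this is a generic sketch, not code taken from the repository), here is a minimal vector-addition example showing the standard allocate/copy/launch/copy-back pattern:

```cpp
// Minimal CUDA C++ vector addition: the canonical first example in most
// CUDA beginner courses. Illustrative only; not code from this repository.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```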
Alternatives and similar repositories for cuda-beginner-course-cpp-version
Users interested in cuda-beginner-course-cpp-version are comparing it to the repositories listed below.
- A simple neural network inference framework ☆25 · Updated 2 years ago
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 5 years ago
- Solutions for Programming Massively Parallel Processors, 2nd Edition ☆33 · Updated 3 years ago
- EasyNN is a neural network inference framework built for teaching, designed so that even complete beginners can write an inference framework on their own ☆32 · Updated last year
- A large collection of CUDA/TensorRT examples for learning ☆150 · Updated 3 years ago
- A llama model inference framework implemented in CUDA C++ ☆62 · Updated 10 months ago
- Awesome code, projects, books, etc. related to CUDA ☆24 · Updated last month
- A one-page-only, CGraph-API-like DAG project ☆25 · Updated 7 months ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆10 · Updated last year
- Code accompanying the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7 ☆69 · Updated last year
- Efficient deployment: TensorRT inference for YOLOX, V3, V4, V5, V6, V7, V8, and EdgeYOLO, with pre- and post-processing implemented as CUDA kernels (C++/CUDA) 🚀 ☆50 · Updated 2 years ago
- A layered, decoupled deep learning inference engine ☆75 · Updated 7 months ago
- ☆10 · Updated last year
- A lightweight llama-like LLM inference framework based on Triton kernels ☆152 · Updated last month
- TensorRT encapsulation: learn, rewrite, practice ☆29 · Updated 2 years ago
- SGEMM optimization with CUDA, step by step (a naive baseline kernel is sketched after this list) ☆20 · Updated last year
- ☆30 · Updated 10 months ago
- TensorRT 2022 runner-up solution: accelerating the MobileViT model with TensorRT ☆68 · Updated 3 years ago
- An ONNX-based quantization tool ☆71 · Updated last year
- Course materials from Bilibili ☆75 · Updated 2 years ago
- ☆35 · Updated 4 months ago
- Speed up image preprocessing with CUDA when handling images or running TensorRT inference ☆77 · Updated last month
- Quantize yolov5 using pytorch_quantization 🚀 ☆14 · Updated last year
- ☆47 · Updated 2 years ago
- PyTorch Quantization-Aware Training (QAT) ☆36 · Updated last year
- Llama 2 inference ☆41 · Updated last year
- End-to-end YOLOv12 TensorRT accelerated inference and INT8 quantization ☆13 · Updated 6 months ago
- A demonstration of autoTVM search for optimized neural network inference code: compiles the open-source CenterFace model with TVM, uses autoTVM to search for the best inference schedule, and finally deploys the result as compiled C++ code. The demo platform is CUDA, but other targets such as Raspberry Pi, Android phones, or iPhones are also possible ☆28 · Updated 4 years ago
- A Rust reimplementation of the deep learning inference frameworks from https://github.com/zjhellofss/KuiperInfer and https://github.com/zjhellofss/kuiperdatawhale ☆16 · Updated last year
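For the step-by-step SGEMM repository mentioned above, the usual starting point is a naive kernel in which each thread computes one element of C; later optimization steps add shared-memory tiling and register blocking. The sketch below is a generic baseline under that assumption, not code from the listed repository:

```cpp
// Naive SGEMM (C = alpha * A * B + beta * C), row-major layout.
// A is MxK, B is KxN, C is MxN. One thread per output element.
#include <cuda_runtime.h>

__global__ void sgemm_naive(int M, int N, int K, float alpha,
                            const float* A, const float* B,
                            float beta, float* C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // row of C
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // column of C
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];   // dot(row of A, col of B)
        C[row * N + col] = alpha * acc + beta * C[row * N + col];
    }
}

// Launch helper (hypothetical name): 16x16 thread blocks covering all of C.
// dA, dB, dC must already be device pointers.
void launch_sgemm_naive(int M, int N, int K, float alpha,
                        const float* dA, const float* dB,
                        float beta, float* dC) {
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    sgemm_naive<<<grid, block>>>(M, N, K, alpha, dA, dB, beta, dC);
}
```

The naive version is memory-bound because every thread re-reads its row of A and column of B from global memory; the step-by-step optimizations cut that traffic by staging tiles in shared memory and accumulating multiple outputs per thread in registers.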