Phoenix8215 / build_neural_network_from_scratch_CPP
A simple neural network built with the C++17 standard and the Eigen library, supporting both forward and backward propagation.
☆9 · Updated 8 months ago
Alternatives and similar repositories for build_neural_network_from_scratch_CPP:
Users interested in build_neural_network_from_scratch_CPP are comparing it to the libraries listed below.
- A repository for practicing multi-threaded programming in C++ ☆22 · Updated last year
- Learn TensorRT from scratch 🥰 ☆13 · Updated 6 months ago
- Quick and self-contained TensorRT custom plugin implementation and integration ☆54 · Updated 9 months ago
- Deploys the Nanodet detection algorithm on the OpenVINO inference framework with rewritten pre- and post-processing for very high performance, making detection fly on Intel CPU platforms. Also quantizes the model (PTQ) to int8 with NNCF and PPQ for even faster inference. ☆15 · Updated last year
- ☢️ TensorRT Hackathon 2023 second round: Llama model inference acceleration and optimization based on TensorRT-LLM ☆46 · Updated last year
- A unified and extensible pipeline for deep learning model inference with C++. Now supports yolov8, yolov9, clip, and nanosam. More models … ☆11 · Updated 11 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆10 · Updated 9 months ago
- ☆24 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆28 · Updated last year
- Course materials from Bilibili ☆72 · Updated last year
- Awesome code, projects, books, etc. related to CUDA ☆16 · Updated this week
- Async inference for machine learning models ☆26 · Updated 2 years ago
- Thoroughly understand BP (backpropagation) in 15 lines of code; a simple C++ implementation reaching 98.29% accuracy on MNIST classification ☆34 · Updated 2 years ago
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 4 years ago
- An ONNX-based quantization tool ☆71 · Updated last year
- For the 2022 NVIDIA Hackathon ☆20 · Updated 2 years ago
- A concise TensorRT tutorial ☆26 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 · Updated last year
- A simple neural network inference framework ☆24 · Updated last year
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter Notebooks ☆15 · Updated last year
- Llama3 streaming chat sample ☆22 · Updated 11 months ago
- A lightweight llama-like LLM inference framework based on Triton kernels ☆100 · Updated 3 weeks ago
- TensorRT encapsulation: learn, rewrite, practice ☆28 · Updated 2 years ago
- LLM deployment in practice: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☆23 · Updated 2 years ago
- ☆13 · Updated last year
- Quantize yolov5 using pytorch_quantization 🚀🚀🚀 ☆14 · Updated last year
- Multiple GEMM operators constructed with cutlass to support LLM inference ☆17 · Updated 6 months ago
- EasyNN is a neural network inference framework developed for teaching, designed so that anyone can write an inference framework on their own, even with zero background ☆26 · Updated 7 months ago