Phoenix8215 / build_neural_network_from_scratch_CPP
Created a simple neural network using C++17 standard and the Eigen library that supports both forward and backward propagation.
☆9 · Updated 11 months ago
Alternatives and similar repositories for build_neural_network_from_scratch_CPP
Users interested in build_neural_network_from_scratch_CPP are comparing it to the libraries listed below.
- Learn TensorRT from scratch 🥰 ☆15 · Updated 9 months ago
- A repository for practicing multi-threaded programming in C++ ☆24 · Updated last year
- Quick and Self-Contained TensorRT Custom Plugin Implementation and Integration ☆65 · Updated last month
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with matching Jupyter Notebooks ☆16 · Updated 2 years ago
- A large collection of CUDA/TensorRT examples to learn from ☆136 · Updated 2 years ago
- Awesome code, projects, books, etc. related to CUDA ☆19 · Updated this week
- ☢️ TensorRT Hackathon 2023 finals: inference acceleration of the Llama model based on TensorRT-LLM ☆49 · Updated last year
- Deploys the Nanodet detection algorithm on the OpenVINO inference framework with rewritten pre- and post-processing for very high performance, making detection fly on Intel CPU platforms; the model is also quantized (PTQ) to int8 with NNCF and PPQ for even faster inference ☆15 · Updated 2 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆10 · Updated last year
- Llama3 Streaming Chat Sample ☆22 · Updated last year
- A unified and extensible pipeline for deep learning model inference with C++. Now supports yolov8, yolov9, clip, and nanosam. More models … ☆12 · Updated last year
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 4 years ago
- An ONNX-based quantization tool ☆71 · Updated last year
- Thoroughly understand BP (backpropagation): 15 lines of code, a simple C++ implementation, 98.29% accuracy on MNIST classification ☆36 · Updated 3 years ago
- Deep insight into TensorRT, including but not limited to QAT, PTQ, plugins, Triton inference, and CUDA ☆18 · Updated last month
- A lightweight LLM inference framework ☆20 · Updated last month
- ☆47 · Updated 2 years ago
- Code accompanying the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7 ☆69 · Updated last year
- A collection of saved code snippets ☆13 · Updated 2 years ago
- A simple neural network inference framework ☆26 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆58 · Updated 8 months ago
- A light llama-like llm inference framework based on the triton kernel ☆134 · Updated this week
- ☆30 · Updated 8 months ago
- Create your own llm inference server from scratch ☆12 · Updated 7 months ago
- Speed up image preprocessing with CUDA when handling images or running TensorRT inference ☆72 · Updated last week
- ☆26 · Updated last year
- YOLOv5 on Orin DLA ☆205 · Updated last year
- Course materials from Bilibili ☆75 · Updated last year
- C++ TensorRT Implementation of NanoSAM ☆39 · Updated last year
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆42 · Updated last year