chenlamei / MobileVit_TensorRT
Runner-up solution from the TensorRT 2022 competition: accelerating the MobileViT model with TensorRT
☆61 · Updated 2 years ago
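As a rough illustration of what "accelerating the MobileViT model with TensorRT" typically involves, the minimal sketch below exports a MobileViT classifier to ONNX and then builds an engine with trtexec. The timm model name `mobilevit_s`, the 256×256 input size, and the trtexec flags are assumptions about a standard workflow, not the repository's actual code.

```python
# Minimal sketch (not the repository's pipeline): export MobileViT to ONNX
# so it can be compiled into a TensorRT engine with trtexec.
# Assumes the `timm` package, which provides a "mobilevit_s" model.
import timm
import torch

model = timm.create_model("mobilevit_s", pretrained=False).eval()
dummy = torch.randn(1, 3, 256, 256)  # MobileViT-S default input resolution

torch.onnx.export(
    model,
    dummy,
    "mobilevit_s.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# A TensorRT engine can then be built from the exported graph, e.g.:
#   trtexec --onnx=mobilevit_s.onnx --saveEngine=mobilevit_s.plan --fp16
```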
Alternatives and similar repositories for MobileVit_TensorRT:
Users interested in MobileVit_TensorRT are comparing it to the repositories listed below:
- algorithm-cpp projects ☆79 · Updated 2 years ago
- An ONNX-based quantization tool. ☆71 · Updated last year
- A roundup of YOLOv5 TensorRT INT8 quantization methods ☆66 · Updated last year
- ☆41 · Updated last year
- ☆130 · Updated last year
- An 8-bit quantization sample for YOLOv5. PTQ, QAT, and partial quantization have all been implemented, and the results are presented based… ☆96 · Updated 2 years ago
- This code accompanies the Bilibili video https://www.bilibili.com/video/BV18L41197Uz/?spm_id_from=333.788&vd_source=eefa4b6e337f16d87d87c2c357db8ca7 ☆64 · Updated last year
- Uses a pattern matcher on ONNX models to match and replace subgraphs. ☆76 · Updated 11 months ago
- TensorRT 2022 second-round entry: TensorRT inference optimization for MST++, the first Transformer-based image reconstruction model ☆138 · Updated 2 years ago
- Speeds up image preprocessing with CUDA when handling images or running TensorRT inference ☆57 · Updated last week
- ☆110 · Updated 10 months ago
- ☆77 · Updated last year
- ☆40 · Updated 2 years ago
- Some tools for working with PaddlePaddle models ☆72 · Updated 2 years ago
- Quantizing YOLOv8 with pytorch_quantization ☆97 · Updated last year
- Efficient deployment: TensorRT inference for YOLOX, YOLOv3/v4/v5/v6/v7/v8, and EdgeYOLO, with pre- and post-processing implemented entirely in CUDA kernels (C++/CUDA) 🚀 ☆49 · Updated last year
- learning-cuda-trt ☆107 · Updated last year
- A large collection of CUDA/TensorRT cases for learning CUDA and TensorRT ☆121 · Updated 2 years ago
- Async inference for machine learning models ☆26 · Updated 2 years ago
- A simple tool that can generate TensorRT plugin code quickly. ☆224 · Updated last year
- YOLOv5 inference using the NVDEC hardware decoder ☆88 · Updated 3 years ago
- Second-prize code submission for the 2021 NVIDIA-Alibaba TensorRT competition, by team 美迪康 AI Lab ☆166 · Updated 2 years ago
- Quick and Self-Contained TensorRT Custom Plugin Implementation and Integration ☆50 · Updated 7 months ago
- nerf ☆42 · Updated 2 years ago
- ☆23 · Updated last year
- ☆115 · Updated last year
- ☆49 · Updated last year
- ☆53 · Updated last year
- TensorRT INT8 quantization of a YOLOv5 ONNX model ☆180 · Updated 3 years ago
- Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs. ☆35 · Updated last month