HunyuanDiT with TensorRT and libtorch
☆18 · May 22, 2024 · Updated last year
Alternatives and similar repositories for HunyuanDiT-TensorRT-libtorch
Users interested in HunyuanDiT-TensorRT-libtorch are comparing it to the libraries listed below.
- A tool to convert a TensorRT engine/plan into a fake ONNX model ☆41 · Nov 22, 2022 · Updated 3 years ago
- ☆23 · Jan 3, 2024 · Updated 2 years ago
- ☆20 · Dec 29, 2023 · Updated 2 years ago
- For learning GOT/Qwen/OnnxLLm ☆53 · Oct 8, 2024 · Updated last year
- ffmpeg + cuvid + TensorRT + multi-camera ☆12 · Dec 31, 2024 · Updated last year
- Stable Diffusion in TensorRT 8.5+ ☆15 · Mar 19, 2023 · Updated 2 years ago
- A simple neural network inference framework ☆25 · Aug 1, 2023 · Updated 2 years ago
- Collected code snippets of interest ☆13 · Jun 6, 2023 · Updated 2 years ago
- Comparing LLM API performance metrics, with in-depth analysis of key indicators such as TTFT and TPS ☆20 · Sep 12, 2024 · Updated last year
- ☆30 · Nov 16, 2024 · Updated last year
- An LLM deployment project based on ONNX ☆50 · Oct 9, 2024 · Updated last year
- Running GOT-OCR2.0 inference with mnn-llm ☆14 · Oct 2, 2024 · Updated last year
- Learn TensorRT from scratch 🥰 ☆18 · Sep 29, 2024 · Updated last year
- End-to-end TensorRT implementation of yolov7-pose ☆27 · Sep 8, 2022 · Updated 3 years ago
- Deploying the YOLACT segmentation algorithm with TensorRT ☆14 · May 7, 2022 · Updated 3 years ago
- Deploys the NanoDet detection algorithm on the OpenVINO inference framework, with rewritten pre- and post-processing for very high performance; makes detection fly on Intel CPU platforms! Also quantizes the model (PTQ) to int8 precision with NNCF and PPQ for even faster inference ☆16 · Jun 14, 2023 · Updated 2 years ago
- CLIP inference implemented in C++. The model is slightly modified; the changes and the model-export code can be found in the README, and the model files, including the AX650 models, are in Releases. Now also supports ChineseCLIP ☆31 · Jun 19, 2025 · Updated 8 months ago
- A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more! ☆54 · Updated this week
- Training the LLaMA language model with MMEngine! It supports LoRA fine-tuning! ☆41 · Apr 2, 2023 · Updated 2 years ago
- A plugin-oriented framework for structured video analysis. Chinese developers: add WeChat zhzhi78 to join the discussion group. ☆18 · May 28, 2024 · Updated last year
- ☆22 · Apr 10, 2024 · Updated last year
- Awesome code, projects, books, etc. related to CUDA ☆31 · Feb 3, 2026 · Updated last month
- Large Language Model ONNX Inference Framework ☆35 · Nov 25, 2025 · Updated 3 months ago
- In our implementation of Qwen-Image-Edit, we employ block causal attention to improve inference speed. ☆37 · Feb 16, 2026 · Updated 2 weeks ago
- An example of Segment Anything inference with ncnn ☆124 · May 5, 2023 · Updated 2 years ago
- ☆42 · Nov 29, 2022 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆43 · Oct 20, 2023 · Updated 2 years ago
- ☆26 · Nov 21, 2024 · Updated last year
- Examples of AI models running on boards such as Horizon and Rockchip ☆21 · Jul 10, 2023 · Updated 2 years ago
- ☆14 · Feb 9, 2026 · Updated 3 weeks ago
- End-to-end YOLOv12 TensorRT accelerated inference and INT8 quantization ☆13 · Mar 5, 2025 · Updated last year
- Inference deployment of Llama 3 ☆11 · Apr 21, 2024 · Updated last year
- DETR with TensorRT: removes auxiliary heads that are unused at inference, speeds things up further with FP16 deployment, and offers a new fix for the all-zero outputs after TensorRT conversion. ☆12 · Jan 9, 2024 · Updated 2 years ago
- C++ code for deploying FastSAM with RKNN ☆14 · May 30, 2024 · Updated last year
- 🎉My collection of CUDA kernels~ ☆11 · Jun 25, 2024 · Updated last year
- Llama3 Streaming Chat Sample ☆22 · Apr 24, 2024 · Updated last year
- ☆47 · Mar 27, 2023 · Updated 2 years ago
- Accelerating SAHI-based inference on YOLO models using TensorRT ☆93 · Jan 6, 2026 · Updated 2 months ago
- ☆26 · Aug 15, 2023 · Updated 2 years ago