High-performance, lightweight C++ LLM and VLM inference software for Physical AI
☆302 · Mar 19, 2026 · Updated this week
Alternatives and similar repositories for TensorRT-Edge-LLM
Users interested in TensorRT-Edge-LLM are comparing it to the libraries listed below.
- NVIDIA DLA-SW, the recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications. ☆228 · Jun 10, 2024 · Updated last year
- ☆54 · Jan 5, 2026 · Updated 2 months ago
- A composable container for Adaptive ROS 2 Node computations. Select between FPGA, CPU or GPU at run-time. ☆12 · Apr 14, 2022 · Updated 3 years ago
- Multiple-LiDAR preprocessor for BEVFusion. ☆10 · Aug 25, 2023 · Updated 2 years ago
- A collection of VLM papers, blogs, and projects, with a focus on VLMs in autonomous driving and related reasoning techniques. ☆11 · Nov 16, 2024 · Updated last year
- ☆24 · Oct 10, 2022 · Updated 3 years ago
- A project demonstrating how to use nvmetamux to run multiple models in parallel. ☆112 · Oct 18, 2024 · Updated last year
- FP8 flash attention on the Ada architecture, implemented with the CUTLASS library. ☆81 · Aug 12, 2024 · Updated last year
- YOLOv5 on Orin DLA. ☆221 · Feb 18, 2024 · Updated 2 years ago
- cuTile kernel examples. ☆40 · Feb 6, 2026 · Updated last month
- A project demonstrating LiDAR-related AI solutions, including three GPU-accelerated Lidar/camera DL networks (PointPillars, CenterPoint, … ☆1,767 · Mar 10, 2026 · Updated 2 weeks ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance.⚡️ ☆149 · May 10, 2025 · Updated 10 months ago
- NVIDIA TensorRT-RTX is an SDK for high-performance AI inference on NVIDIA RTX GPUs. This repository contains Open-Source Software compone… ☆84 · Updated this week
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,218 · Updated this week
- ☆10 · Dec 21, 2020 · Updated 5 years ago
- A personal collection of favorite code snippets. ☆12 · Jun 6, 2023 · Updated 2 years ago
- ☆13 · Mar 26, 2022 · Updated 3 years ago
- Python scripts performing open-vocabulary object detection using the YOLO-World model in ONNX, and exporting the ONNX model for AXera's NPU. ☆11 · Aug 11, 2025 · Updated 7 months ago
- A CUDA kernel for NHWC GroupNorm for PyTorch. ☆23 · Nov 15, 2024 · Updated last year
- TensorRT deployment and PTQ/QAT tooling for FastBEV, with a total runtime of only 6.9 ms. ☆296 · Dec 8, 2023 · Updated 2 years ago
- ☢️ TensorRT Hackathon 2023 finals: inference acceleration and optimization for the Llama model based on TensorRT-LLM. ☆50 · Oct 20, 2023 · Updated 2 years ago
- Aerial Detection Toolbox. ☆11 · Jan 18, 2023 · Updated 3 years ago
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs. ☆27 · Dec 17, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling. ☆21 · Updated this week
- A simplified flash-attention implementation using CUTLASS, intended for teaching purposes. ☆59 · Aug 12, 2024 · Updated last year
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆107 · Mar 23, 2024 · Updated 2 years ago
- ☆55 · Feb 5, 2026 · Updated last month
- The Triton backend for TensorRT. ☆86 · Mar 10, 2026 · Updated last week
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM. ☆43 · Oct 20, 2023 · Updated 2 years ago
- ☆32 · Jul 23, 2024 · Updated last year
- Deploys the NanoDet detection algorithm on the OpenVINO inference framework, rewriting the pre- and post-processing for very high detection performance on Intel CPU platforms, and quantizes the model to int8 (PTQ) with the NNCF and PPQ tools for even faster inference. ☆16 · Jun 14, 2023 · Updated 2 years ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆253 · Feb 13, 2026 · Updated last month
- Experimental projects related to TensorRT. ☆121 · Updated this week
- NVIDIA DeepStream SDK 8.0 / 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 application for YOLO-Face models. ☆79 · Oct 13, 2025 · Updated 5 months ago
- ☆12 · Jan 12, 2024 · Updated 2 years ago
- Try edge computing devices from scratch: NVIDIA Jetson Nano. ☆23 · Aug 4, 2021 · Updated 4 years ago
- ☆23 · Aug 14, 2024 · Updated last year
- On-device C++ deployment of YOLOv8-pose on Rockchip RKNN boards. ☆37 · Jan 12, 2024 · Updated 2 years ago