NVIDIA / TensorRT-Model-Optimizer

nvidia-modelopt is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models so they can be deployed through downstream frameworks like TensorRT-LLM or TensorRT with faster inference.
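To make the quantization technique concrete, here is a minimal plain-Python sketch of symmetric per-tensor INT8 post-training quantization, the basic idea behind weight compression in libraries of this kind. This is an illustration only, not nvidia-modelopt's actual API (which operates on framework model objects); the function names are hypothetical.

```python
# Illustrative symmetric per-tensor INT8 quantization sketch.
# NOT the nvidia-modelopt API; names here are hypothetical.

def quantize_int8(values):
    """Map floats to int8 codes plus a scale factor (symmetric, per-tensor)."""
    amax = max(abs(v) for v in values) or 1.0   # largest magnitude in the tensor
    scale = amax / 127.0                        # map [-amax, amax] onto [-127, 127]
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.031, 1.27]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)
```

Real toolkits layer calibration data, per-channel scales, and hardware-specific formats (e.g. FP8) on top of this core mapping.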
860 stars · Updated 2 weeks ago

Alternatives and similar repositories for TensorRT-Model-Optimizer:
