NVIDIA / TensorRT-Model-Optimizer
A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed. A minimal usage sketch follows below.
☆1,028 · Updated last week
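A minimal post-training quantization sketch against ModelOpt's PyTorch API (`pip install nvidia-modelopt`). The toy model and random calibration batches are placeholders for a real network and dataset; the config name is one of the library's predefined recipes.

```python
import torch
import modelopt.torch.quantization as mtq

# Toy stand-ins for a real network and calibration dataset.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64)
)
calib_data = [torch.randn(8, 64) for _ in range(16)]

def forward_loop(m):
    # ModelOpt runs this callback to collect activation statistics.
    for batch in calib_data:
        m(batch)

# Predefined recipes (INT8_DEFAULT_CFG, FP8_DEFAULT_CFG, INT4_AWQ_CFG, ...)
# choose which layers are quantized and how ranges are calibrated.
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)
```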
Alternatives and similar repositories for TensorRT-Model-Optimizer
Users interested in TensorRT-Model-Optimizer are comparing it to the libraries listed below.
- A PyTorch quantization backend for Optimum (sketch after the list) ☆962 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆714 · Updated 4 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,529 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆852 · Updated 10 months ago
- This repository contains tutorials and examples for Triton Inference Server ☆732 · Updated last month
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (sketch after the list) ☆1,438 · Updated last year
- PyTorch native quantization and sparsity for training and inference (sketch after the list) ☆2,168 · Updated this week
- The Triton TensorRT-LLM Backend ☆859 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆835 · Updated last month
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆510 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆617 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆526 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,449 · Updated this week
- FlashInfer: Kernel Library for LLM Serving (sketch after the list) ☆3,306 · Updated this week
- A parser, editor and profiler tool for ONNX models. ☆442 · Updated last month
- Pipeline Parallelism for PyTorch ☆769 · Updated 10 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆643 · Updated this week
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruct… ☆438 · Updated 10 months ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆591 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (sketch after the list) ☆1,616 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆870 · Updated last week
- ONNX Optimizer (sketch after the list) ☆727 · Updated this week
- TensorRT Plugin Autogen Tool ☆369 · Updated 2 years ago
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆649 · Updated last week
- Flash Attention in ~100 lines of CUDA (forward pass only; sketch after the list) ☆859 · Updated 6 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile (sketch after the list) ☆698 · Updated 2 months ago
- Triton Model Analyzer is a CLI tool for understanding the compute and memory requirements of the Triton Inference Serv… ☆479 · Updated last month
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (sketch after the list). ☆1,371 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,002 · Updated this week
- Neural Network Compression Framework for enhanced OpenVINO™ inference ☆1,055 · Updated this week
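Usage sketches for selected libraries above

Weight-only INT8 quantization with optimum-quanto, the Optimum quantization backend listed first: a minimal sketch assuming `pip install optimum-quanto`; the toy model is a stand-in for a real one.

```python
import torch
from optimum.quanto import quantize, freeze, qint8

# Toy stand-in for a real model (e.g. a Hugging Face transformer).
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)

quantize(model, weights=qint8)  # swap Linear modules for quantized variants
freeze(model)                   # materialize the int8 weights in place

out = model(torch.randn(2, 64))
```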
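The core SmoothQuant trick, shown as a self-contained equivalence check rather than the repo's actual API: a per-channel scale s_j = max|X_j|^α / max|W_j|^(1−α) migrates activation outliers into the weights, so (X/s)·(diag(s)W) equals XW exactly while both factors become easier to quantize. Names here are illustrative.

```python
import torch

def smooth(X, W, alpha=0.5):
    # X: [tokens, in_features] activations; W: [in_features, out_features].
    act_max = X.abs().amax(dim=0)        # per-channel activation range
    w_max = W.abs().amax(dim=1)          # per-channel weight range
    s = (act_max ** alpha) / (w_max ** (1 - alpha))
    s = s.clamp(min=1e-5)
    return X / s, W * s.unsqueeze(1)     # same product, flatter activations

X = torch.randn(128, 64) * (torch.rand(64) * 10)  # outlier-heavy channels
W = torch.randn(64, 64)
X_s, W_s = smooth(X, W)
assert torch.allclose(X @ W, X_s @ W_s, atol=1e-3)
```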
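torchao's one-liner for weight-only INT8, assuming a recent `pip install torchao`; the API names follow the project README and may shift between releases.

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
).to(torch.bfloat16)

quantize_(model, int8_weight_only())  # in-place: weights become int8 tensors
out = model(torch.randn(2, 64, dtype=torch.bfloat16))
```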
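A FlashInfer decode-attention call following the shapes in the project's README (`pip install flashinfer-python`, CUDA GPU required): one query token attends over a full KV cache in a single fused kernel.

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 32, 128, 1024
q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused decode kernel: softmax(q k^T / sqrt(d)) v over the whole cache.
o = flashinfer.single_decode_with_kv_cache(q, k, v)  # [num_qo_heads, head_dim]
```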
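A one-shot W4A16 GPTQ run with llm-compressor, following its README (`pip install llmcompressor`); the model name, dataset, and keyword arguments track the upstream examples and may drift between versions.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Quantize all Linear layers to 4-bit weights, leaving the LM head intact.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# One-shot PTQ: calibrates on a small dataset, saves a vLLM-loadable checkpoint.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```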
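ONNX Optimizer's API surface is small; a sketch with a placeholder model path, assuming `pip install onnx onnxoptimizer`:

```python
import onnx
import onnxoptimizer

model = onnx.load("model.onnx")              # placeholder path
print(onnxoptimizer.get_available_passes())  # enumerate built-in rewrites

# Apply a subset of graph-rewriting passes and save the result.
optimized = onnxoptimizer.optimize(model, ["eliminate_identity", "fuse_bn_into_conv"])
onnx.save(optimized, "model_opt.onnx")
```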
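What the ~100-line CUDA forward pass computes, re-derived in plain PyTorch: tile over K/V while keeping a running max m, normalizer l, and unnormalized accumulator per query row, so softmax(QKᵀ)V is produced without ever materializing the full score matrix.

```python
import torch

def flash_attention_reference(Q, K, V, block=64):
    # Q, K, V: [seq, dim]. Online-softmax over K/V tiles of size `block`.
    seq, dim = Q.shape
    scale = dim ** -0.5
    m = torch.full((seq, 1), float("-inf"))  # running row max
    l = torch.zeros(seq, 1)                  # running softmax normalizer
    acc = torch.zeros(seq, dim)              # unnormalized output
    for start in range(0, seq, block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        S = (Q @ Kb.T) * scale                              # scores for this tile
        m_new = torch.maximum(m, S.amax(dim=1, keepdim=True))
        correction = torch.exp(m - m_new)                   # rescale old partials
        P = torch.exp(S - m_new)
        l = l * correction + P.sum(dim=1, keepdim=True)
        acc = acc * correction + P @ Vb
        m = m_new
    return acc / l

Q, K, V = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((Q @ K.T) * 64 ** -0.5, dim=-1) @ V
assert torch.allclose(flash_attention_reference(Q, K, V), ref, atol=1e-4)
```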
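depyf's debugging entry point, per its README (`pip install depyf`; needs a working torch.compile toolchain): everything Dynamo and Inductor generate inside the context is decompiled into readable Python source under the given directory.

```python
import torch
import depyf

@torch.compile
def toy(x):
    return torch.sin(x) + torch.cos(x)

# Dump transformed bytecode and generated kernels as readable source files.
with depyf.prepare_debug("./compiled_debug"):
    toy(torch.randn(8))
```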
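EAGLE drafts at the feature level with a trained autoregression head; the generic greedy draft-and-verify loop below is not EAGLE's method, only an illustration of the accept/fix-up control flow that such speculative decoders share. All names are illustrative, and the toy "models" just emit random logits.

```python
import torch

def speculative_step(draft_model, target_model, ids, k=4):
    # 1) Draft k tokens cheaply and greedily with the small model.
    draft = ids
    for _ in range(k):
        nxt = draft_model(draft).argmax(dim=-1)[:, -1:]
        draft = torch.cat([draft, nxt], dim=1)
    # 2) Score every drafted position with ONE target forward pass.
    target_next = target_model(draft).argmax(dim=-1)  # pred for pos j+1 at j
    # 3) Accept the longest agreeing prefix, then take the target's fix-up token.
    n = ids.shape[1]
    out = ids
    for i in range(k):
        expect = target_next[:, n + i - 1 : n + i]    # target's token for slot n+i
        out = torch.cat([out, expect], dim=1)
        if not torch.equal(expect, draft[:, n + i : n + i + 1]):
            break                                     # mismatch: keep fix-up, stop
    return out

# Toy stand-in "models" mapping token ids to random logits [B, len, vocab].
vocab = 100
toy_lm = lambda x: torch.randn(x.shape[0], x.shape[1], vocab)
ids = torch.randint(0, vocab, (1, 5))
print(speculative_step(toy_lm, toy_lm, ids).shape)  # grows by 1..k tokens
```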