NVIDIA / TensorRT-Model-Optimizer
A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed.
☆1,078 · Updated 2 weeks ago
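For orientation, the snippet below sketches the library's post-training quantization flow: calibrate on a few representative batches, then replace supported modules with quantized equivalents. This is a minimal sketch assuming the `modelopt.torch.quantization` API; the toy model, calibration data, and choice of the SmoothQuant INT8 preset are illustrative, not prescriptive.

```python
import torch
import torch.nn as nn
import modelopt.torch.quantization as mtq  # pip install nvidia-modelopt

# Toy stand-in for a real transformer; any torch.nn.Module works.
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
calib_data = [torch.randn(8, 64) for _ in range(16)]

def forward_loop(m):
    # Feed representative batches so activation ranges can be observed
    # before the quantization parameters are frozen.
    with torch.no_grad():
        for batch in calib_data:
            m(batch)

# Quantize in place; the resulting model can then be exported to
# TensorRT-LLM or TensorRT for deployment.
model = mtq.quantize(model, mtq.INT8_SMOOTHQUANT_CFG, forward_loop)
```

The same `mtq.quantize` call accepts other preset configs (FP8, INT4 AWQ, and so on); check the repository for the current list.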
Alternatives and similar repositories for TensorRT-Model-Optimizer
Users interested in TensorRT-Model-Optimizer are comparing it to the libraries listed below.
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,587 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆730 · Updated 4 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (see the sketch after this list) ☆1,461 · Updated last year
- A PyTorch quantization backend for Optimum ☆977 · Updated 3 weeks ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆868 · Updated 10 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,219 · Updated this week
- The Triton TensorRT-LLM Backend ☆870 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆856 · Updated 3 weeks ago
- This repository contains tutorials and examples for Triton Inference Server ☆742 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆653 · Updated 3 weeks ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆522 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton language ☆635 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,448 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,690 · Updated last week
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆551 · Updated this week
- Distributed compiler based on Triton for parallel systems ☆930 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆887 · Updated 7 months ago
- Pipeline Parallelism for PyTorch ☆775 · Updated 11 months ago
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,461 · Updated last week
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples on how to use it ☆596 · Updated 2 weeks ago
- Microsoft Automatic Mixed Precision Library ☆616 · Updated 10 months ago
- ONNX Optimizer ☆735 · Updated 2 weeks ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,472 · Updated this week
- Code for the NeurIPS 2024 paper: QuaRot, an end-to-end 4-bit inference of large language models ☆410 · Updated 8 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆1,629 · Updated this week
- A parser, editor and profiler tool for ONNX models ☆446 · Updated last month
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor cores with WMMA API and MMA PTX instruct… ☆445 · Updated 10 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile ☆706 · Updated 3 months ago
- TensorRT Plugin Autogen Tool ☆369 · Updated 2 years ago
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆702 · Updated 2 weeks ago
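Several of the quantization entries above revolve around the same observation: activation outliers, not weights, are what make low-bit LLM quantization hard. The sketch below illustrates the smoothing step from the SmoothQuant paper (ICML 2023): a per-channel scale migrates outlier magnitude from activations into weights while leaving the layer's output mathematically unchanged. The α=0.5 default follows the paper; the calibration data here is synthetic and purely for demonstration.

```python
import torch

def smoothquant_scales(acts, weight, alpha=0.5):
    """Per-input-channel smoothing factors s_j = max|X_j|^a / max|W_j|^(1-a)."""
    act_max = acts.abs().amax(dim=0)   # (in_features,) activation ranges
    w_max = weight.abs().amax(dim=0)   # (in_features,) weight ranges
    return (act_max.pow(alpha) / w_max.pow(1 - alpha)).clamp(min=1e-5)

torch.manual_seed(0)
X = torch.randn(128, 16) * torch.linspace(0.1, 8.0, 16)  # a few outlier channels
W = torch.randn(32, 16)                                  # Linear weight (out, in)

s = smoothquant_scales(X, W)
X_smooth, W_smooth = X / s, W * s  # Y = (X diag(1/s)) (diag(s) W^T) = X W^T

# The layer output is unchanged, but activation outliers shrink, making
# X_smooth far easier to quantize to INT8.
assert torch.allclose(X @ W.T, X_smooth @ W_smooth.T, atol=1e-3)
print(f"max|X| {X.abs().max():.1f} -> max|X_smooth| {X_smooth.abs().max():.1f}")
```

The systems-oriented entries in the list (QServe, Marlin, BitBLAS) then focus on executing such pre-smoothed low-bit layers efficiently on GPU hardware.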