CactusQ / TensorRT-LLM-Tutorial
Getting started with TensorRT-LLM using BLOOM as a case study
☆23 · Updated last year
Alternatives and similar repositories for TensorRT-LLM-Tutorial
Users that are interested in TensorRT-LLM-Tutorial are comparing it to the libraries listed below
- ☆319 · Updated last week
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- Integrating SSE with NVIDIA Triton Inference Server using a Python backend and Zephyr model. There is very little documentation on how to use … ☆10 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 weeks ago
- A family of compressed models obtained via pruning and knowledge distillation ☆361 · Updated last month
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- Notes on quantization in neural networks ☆113 · Updated 2 years ago
- OpenAI compatible API for TensorRT LLM triton backend ☆218 · Updated last year
- This reference can be used with any existing OpenAI integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆126 · Updated last year
- Efficient LLM Inference over Long Sequences ☆393 · Updated 5 months ago
- ☆241 · Updated 2 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆323 · Updated 2 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆52 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆349 · Updated 7 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆197 · Updated 11 months ago
- This repository contains tutorials and examples for Triton Inference Server ☆814 · Updated last week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime ☆184 · Updated 8 months ago
- ☆125 · Updated last week
- ☆227 · Updated 11 months ago
- Comparison of Language Model Inference Engines ☆238 · Updated last year
- The Triton TensorRT-LLM Backend ☆909 · Updated last week
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- ☆205 · Updated 7 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆755 · Updated this week
- vLLM Router ☆51 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆323 · Updated 9 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆317 · Updated 3 weeks ago
- Code for HyperSeg and HyperSum ☆16 · Updated 5 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last month
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago