NVIDIA / TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
☆8,681 · Updated last week
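For a sense of the workflow, TensorRT-LLM's high-level Python LLM API goes from a Hugging Face checkpoint to a built engine and generated text in a few lines. Below is a minimal sketch based on the project's quickstart pattern; exact import paths and parameter names can vary across TensorRT-LLM versions, and the TinyLlama checkpoint is just an illustrative choice:

```python
from tensorrt_llm import LLM, SamplingParams

# Loading the model triggers the engine build on first use
# (an illustrative Hugging Face checkpoint; any supported model works).
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# generate() runs inference through the built TensorRT engine.
for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```

The same built engine can also be served through the project's C++ runtime components, so the Python API above is only one of the execution paths the repository provides.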
Related projects
Alternatives and complementary repositories for TensorRT-LLM
- SGLang is a fast serving framework for large language models and vision language models ☆6,127 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆30,423 · Updated this week
- text-generation-inference: Large Language Model Text Generation Inference ☆9,122 · Updated this week
- flash-attention: Fast and memory-efficient exact attention ☆14,279 · Updated this week
- gpt-fast: Simple and efficient pytorch-native transformer text generation in <1000 LOC of python ☆5,669 · Updated last month
- bitsandbytes: Accessible large language models via k-bit quantization for PyTorch ☆6,299 · Updated this week
- AutoGPTQ: An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆4,497 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,059 · Updated 5 months ago
- FasterTransformer: Transformer-related optimization, including BERT, GPT ☆5,890 · Updated 7 months ago
- llm-awq: [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,526 · Updated last month
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs ☆4,669 · Updated this week
- axolotl: Go ahead and axolotl questions ☆7,930 · Updated this week
- TinyLlama: an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens ☆7,919 · Updated 6 months ago
- intel-extension-for-transformers: ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,138 · Updated last month
- TRL: Train transformer language models with reinforcement learning ☆10,086 · Updated this week
- PowerInfer: High-speed Large Language Model Serving on PCs with Consumer-grade GPUs ☆7,965 · Updated 2 months ago
- lm-evaluation-harness: A framework for few-shot evaluation of language models ☆6,990 · Updated this week
- mergekit: Tools for merging pretrained large language models ☆4,816 · Updated 2 weeks ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆2,613 · Updated this week
- exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,680 · Updated this week
- Unsloth: Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory ☆18,263 · Updated this week
- torchtune: PyTorch native finetuning library ☆4,336 · Updated this week
- llama-cpp-python: Python bindings for llama.cpp ☆8,141 · Updated this week
- Megatron-LM: Ongoing research training transformer models at scale ☆10,595 · Updated this week
- mistral-inference: Official inference library for Mistral models ☆9,738 · Updated last week
- llama-recipes: Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom dataset… ☆15,222 · Updated this week
- xFormers: Hackable and optimized Transformers building blocks, supporting a composable construction ☆8,660 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference ☆1,765 · Updated this week
- ggml: Tensor library for machine learning ☆11,233 · Updated this week
- LoRAX: Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,205 · Updated this week