npuichigo / openai_trtllm
OpenAI-compatible API for the TensorRT-LLM Triton backend
☆220 · Updated last year
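Since the project exposes an OpenAI-compatible API in front of the TensorRT-LLM Triton backend, any standard OpenAI client should be able to talk to it. Below is a minimal sketch using the official `openai` Python SDK; the base URL, port, API key, and model name are placeholders, not values taken from the project's documentation:

```python
# Minimal sketch: querying an OpenAI-compatible endpoint with the official
# openai Python SDK. base_url, api_key, and model are assumptions; substitute
# whatever your openai_trtllm deployment actually serves.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # hypothetical local openai_trtllm address
    api_key="not-needed",                 # the proxy may not validate this
)

response = client.chat.completions.create(
    model="tensorrt_llm",  # placeholder name for the model exposed by Triton
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```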
Alternatives and similar repositories for openai_trtllm
Users interested in openai_trtllm are comparing it to the libraries listed below.
- ☆328 · Updated last week
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last month
- Easy and Efficient Quantization for Transformers ☆202 · Updated 7 months ago
- A high-performance inference system for large language models, designed for production environments. ☆491 · Updated last month
- ☆206 · Updated 8 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆327 · Updated 4 months ago
- ☆56 · Updated last year
- The Triton TensorRT-LLM Backend ☆919 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆138 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆86 · Updated 2 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆943 · Updated 3 months ago
- Inference server benchmarking tool ☆142 · Updated 3 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to onnx/onnx-runtime ☆184 · Updated 10 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆73 · Updated 2 weeks ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆237 · Updated this week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆819 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆992 · Updated last year
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ☆106 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- ☆125 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆251 · Updated last year
- ☆278 · Updated last week
- Module, Model, and Tensor Serialization/Deserialization ☆286 · Updated 5 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆366 · Updated last year
- A tool to configure, launch and manage your machine learning experiments. ☆215 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆379 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆910 · Updated last month