triton-inference-server / vllm_backend
☆328 · Updated last week
Alternatives and similar repositories for vllm_backend
Users interested in vllm_backend are comparing it to the libraries listed below.
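
For context, the repository this page indexes (triton-inference-server/vllm_backend) lets Triton Inference Server serve models through a vLLM engine. As a minimal client-side sketch of such a deployment, the request below targets Triton's HTTP generate endpoint; the host, port, model name `vllm_model`, and sampling parameters are assumptions modeled on quickstart-style defaults, not fixed values.

```python
# Hypothetical client call against a Triton server running the vLLM backend.
# Assumptions: server on localhost:8000, a deployed model named "vllm_model",
# and pass-through vLLM sampling parameters (temperature, max_tokens).
import requests

url = "http://localhost:8000/v2/models/vllm_model/generate"
payload = {
    "text_input": "What is the Triton Inference Server?",
    "parameters": {"stream": False, "temperature": 0.0, "max_tokens": 64},
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["text_output"])  # generated text returned by the server
```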
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆220 · Updated last year
- Easy and Efficient Quantization for Transformers ☆204 · Updated last week
- The Triton TensorRT-LLM Backend ☆919 · Updated this week
- ☆206 · Updated 8 months ago
- ☆133 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last month
- Common source, scripts and utilities for creating Triton backends. ☆366 · Updated 3 weeks ago
- The Triton backend for the ONNX Runtime. ☆172 · Updated 2 weeks ago
- Comparison of Language Model Inference Engines ☆239 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆327 · Updated 4 months ago
- ☆125 · Updated last year
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆830 · Updated this week
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆220 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆815 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆943 · Updated 3 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆503 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆384 · Updated this week
- A high-performance inference system for large language models, designed for production environments. ☆491 · Updated last month
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ☆106 · Updated last year
- ☆56 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,005 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- vLLM Router ☆54 · Updated last year
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆73 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆216 · Updated this week
- ☆61 · Updated last year
- LLMPerf is a library for validating and benchmarking LLMs ☆1,081 · Updated last year
- The Triton backend for TensorRT. ☆84 · Updated last week
- Benchmark suite for LLMs from Fireworks.ai ☆89 · Updated this week