triton-inference-server / vllm_backend
☆329 · Updated this week
Alternatives and similar repositories for vllm_backend
Users who are interested in vllm_backend are comparing it to the libraries listed below.
- The Triton TensorRT-LLM Backend ☆918 · Updated this week
- Common source, scripts, and utilities for creating Triton backends. ☆367 · Updated this week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆220 · Updated Aug 1, 2024
- This repository contains tutorials and examples for Triton Inference Server. ☆822 · Updated this week
- The Triton backend for TensorRT. ☆86 · Updated this week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆678 · Updated this week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆667 · Updated Feb 7, 2026
- The Triton backend for the ONNX Runtime. ☆173 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆4,935 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server. ☆51 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,334 · Updated Feb 6, 2026
- ☆206 · Updated May 5, 2025
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆832 · Updated Aug 13, 2025
- Easy and Efficient Quantization for Transformers ☆205 · Updated Jan 28, 2026
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,867 · Updated this week
- Serving multiple LoRA-finetuned LLMs as one ☆1,139 · Updated May 8, 2024
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,737 · Updated this week
- OpenVINO backend for Triton. ☆37 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,964 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆1,084 · Updated Dec 9, 2024
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆843 · Updated this week
- The core library and APIs implementing the Triton Inference Server. ☆164 · Updated Feb 4, 2026
- Triton Model Analyzer is a CLI tool for understanding the compute and memory requirements of the Triton Inference Serv… ☆505 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆217 · Updated Feb 3, 2026
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,093 · Updated Jun 30, 2025
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,606 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,314 · Updated May 11, 2025
- ☆97 · Updated Mar 26, 2025
- Transformer-related optimizations, including BERT and GPT ☆6,392 · Updated Mar 27, 2024
- Large Language Model Text Generation Inference ☆10,757 · Updated Jan 8, 2026
- Korean Nested Named Entity Corpus ☆20 · Updated May 13, 2023
- URL downloader supporting checkpointing and continuous checksumming. ☆19 · Updated Nov 29, 2023
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Updated Oct 29, 2025
- RayLLM - LLMs on Ray (Archived). Read README for more info. ☆1,263 · Updated Mar 13, 2025
- CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval ☆13 · Updated Jun 27, 2025
- An AI model designed to test effectiveness in handling external ethical attacks. ☆11 · Updated this week
- Paper reviews on speech recognition and NLP ☆10 · Updated Mar 25, 2021
- A project for Korean automatic word spacing ☆12 · Updated Aug 3, 2020
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,152 · Updated Feb 7, 2026