OpenAI-compatible API for the TensorRT-LLM Triton backend
☆220 · Aug 1, 2024 · Updated last year
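Because openai_trtllm exposes an OpenAI-compatible API in front of the Triton TensorRT-LLM backend, standard OpenAI-style clients can talk to it. A minimal sketch of the wire format such a server expects; the base URL, port, and model name here are assumptions for illustration, not values from the project's documentation:

```python
import json

# Build an OpenAI-style chat-completions request body, as accepted by
# OpenAI-compatible servers such as openai_trtllm.
# NOTE: BASE_URL and the model name are hypothetical; adjust them to
# match your actual deployment.
BASE_URL = "http://localhost:3000/v1"  # assumed deployment address

payload = {
    "model": "ensemble",  # hypothetical Triton model/ensemble name
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# In a real client this JSON body would be POSTed to
# f"{BASE_URL}/chat/completions"; here we only serialize it to show
# the request shape.
body = json.dumps(payload)
print(body)
```

Any HTTP client (or the official OpenAI SDK pointed at a custom base URL) can then send this payload to the server's `/v1/chat/completions` endpoint.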
Alternatives and similar repositories for openai_trtllm
Users interested in openai_trtllm are comparing it to the libraries listed below.
- The Triton TensorRT-LLM Backend ☆934 · May 1, 2026 · Updated last week
- ☆341 · May 1, 2026 · Updated last week
- High-level API for tar-based datasets ☆12 · Feb 3, 2024 · Updated 2 years ago
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆27 · Feb 26, 2024 · Updated 2 years ago
- AI Router ☆14 · Aug 1, 2024 · Updated last year
- ☆28 · Nov 6, 2024 · Updated last year
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,545 · Updated this week
- The driver for LMCache core to run in vLLM ☆64 · Feb 4, 2025 · Updated last year
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆74 · Apr 15, 2026 · Updated 3 weeks ago
- Proxy server for a Triton gRPC server that serves an embedding model, written in Rust ☆21 · Aug 10, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆5,580 · Updated this week
- MERA (Multimodal Evaluation for Russian-language Architectures) is a new open benchmark for the Russian language for evaluating SOTA mode… ☆46 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,636 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,836 · Apr 29, 2026 · Updated last week
- The DL Streamer Pipeline Zoo is a catalog of optimized media and media analytics pipelines. It includes tools for downloading pipelines a… ☆16 · Aug 20, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Dec 4, 2025 · Updated 5 months ago
- JAX bindings for the flash-attention3 kernels ☆22 · Jan 2, 2026 · Updated 4 months ago
- This repository contains tutorials and examples for Triton Inference Server ☆832 · Apr 21, 2026 · Updated 2 weeks ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆4,046 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,637 · Updated this week
- RuBLiMP: Russian Benchmark of Linguistic Minimal Pairs ☆20 · Feb 8, 2026 · Updated 3 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,110 · Jun 30, 2025 · Updated 10 months ago
- Open Source Text Embedding Models with OpenAI Compatible API ☆168 · Jul 13, 2024 · Updated last year
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆3,190 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆956 · Mar 29, 2026 · Updated last month
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆221 · Feb 3, 2026 · Updated 3 months ago
- TensorDock CLI Client ☆10 · Oct 14, 2022 · Updated 3 years ago
- fast-embeddings-api ☆16 · Nov 23, 2023 · Updated 2 years ago
- ☆302 · Apr 30, 2026 · Updated last week
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Mar 31, 2026 · Updated last month
- Effective LLM Alignment Toolkit ☆153 · Jun 25, 2025 · Updated 10 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Mar 13, 2024 · Updated 2 years ago
- Study notes on ggml, a machine learning inference framework ☆18 · Mar 24, 2024 · Updated 2 years ago
- Optimized primitives for collective multi-GPU communication ☆10 · May 8, 2024 · Updated 2 years ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,113 · Dec 9, 2024 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated 2 years ago
- Scripts for bge inference optimization ☆29 · Jan 23, 2024 · Updated 2 years ago
- ☆21 · Feb 27, 2024 · Updated 2 years ago