OpenAI compatible API for TensorRT LLM triton backend
☆219 · Aug 1, 2024 · Updated last year
Alternatives and similar repositories for openai_trtllm
Users interested in openai_trtllm are comparing it to the libraries listed below.
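openai_trtllm exposes the OpenAI chat-completions wire format in front of a Triton TensorRT-LLM deployment, so existing OpenAI clients can talk to it unchanged. As a rough sketch of what such a request looks like (the listen address, path, and model name below are assumptions for illustration, not taken from the repo):

```python
import json

# Hypothetical values: the actual address and model name depend on how
# openai_trtllm and the Triton TensorRT-LLM backend are configured.
URL = "http://localhost:3000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "ensemble") -> str:
    """Build an OpenAI-style chat-completions request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return json.dumps(payload)

body = build_chat_request("Hello!")
print(body)
```

Any OpenAI SDK or plain HTTP client can then POST such a body to the proxy, since the request and response shapes follow the OpenAI API.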
- The Triton TensorRT-LLM Backend ☆931 · Apr 8, 2026 · Updated last week
- ☆338 · Updated this week
- High-level API for tar-based dataset ☆12 · Feb 3, 2024 · Updated 2 years ago
- AI Router ☆14 · Aug 1, 2024 · Updated last year
- ☆28 · Nov 6, 2024 · Updated last year
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,354 · Updated this week
- The driver for LMCache core to run in vLLM ☆64 · Feb 4, 2025 · Updated last year
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆74 · Apr 8, 2026 · Updated last week
- Proxy server for a Triton gRPC server that runs inference on an embedding model, written in Rust ☆21 · Aug 10, 2024 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Apr 11, 2026 · Updated last week
- MERA (Multimodal Evaluation for Russian-language Architectures) is a new open benchmark for the Russian language for evaluating SOTA mode… ☆46 · Updated this week
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,436 · Apr 12, 2026 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,775 · Updated this week
- The DL Streamer Pipeline Zoo is a catalog of optimized media and media analytics pipelines. It includes tools for downloading pipelines a… ☆16 · Aug 20, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Dec 4, 2025 · Updated 4 months ago
- JAX bindings for the flash-attention3 kernels ☆22 · Jan 2, 2026 · Updated 3 months ago
- This repository contains tutorials and examples for Triton Inference Server ☆829 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆4,016 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,573 · Updated this week
- ☆621 · Jul 31, 2024 · Updated last year
- RuBLiMP: Russian Benchmark of Linguistic Minimal Pairs ☆20 · Feb 8, 2026 · Updated 2 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,107 · Jun 30, 2025 · Updated 9 months ago
- Open Source Text Embedding Models with OpenAI Compatible API ☆167 · Jul 13, 2024 · Updated last year
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆3,097 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆952 · Mar 29, 2026 · Updated 3 weeks ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆221 · Feb 3, 2026 · Updated 2 months ago
- ☆297 · Mar 19, 2026 · Updated last month
- fast-embeddings-api ☆16 · Nov 23, 2023 · Updated 2 years ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆111 · Apr 7, 2025 · Updated last year
- Effective LLM Alignment Toolkit ☆153 · Jun 25, 2025 · Updated 9 months ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Mar 31, 2026 · Updated 2 weeks ago
- Study notes on ggml, a machine learning inference framework ☆18 · Mar 24, 2024 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆118 · Mar 13, 2024 · Updated 2 years ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,103 · Dec 9, 2024 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated 2 years ago
- Scripts for BGE inference optimization ☆29 · Jan 23, 2024 · Updated 2 years ago
- ☆21 · Feb 27, 2024 · Updated 2 years ago
- Integrating SSE with NVIDIA Triton Inference Server using a Python backend and Zephyr model. There is very little documentation on how to use … ☆10 · May 29, 2024 · Updated last year