triton-inference-server / stateful_backend
Triton backend that automatically manages model state tensors in the sequence batcher
☆18 · Updated last year
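To make the one-line description concrete: because this backend keeps state tensors on the server between requests of the same sequence, a client sends only its ordinary inputs plus sequence-control metadata. Below is a minimal client sketch using the tritonclient gRPC API; the model name ("stateful_model") and tensor names ("INPUT", "OUTPUT") are placeholders, not taken from this repository.

```python
# Minimal sketch: driving a stateful model through Triton's sequence batcher.
# Model and tensor names are assumptions; the state tensors themselves are
# managed server-side, so the client never sends or receives them.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

sequence_id = 42  # all requests with this ID share one model state
for step, start in [(0, True), (1, False), (2, False)]:
    data = np.full((1, 1), step, dtype=np.float32)
    inp = grpcclient.InferInput("INPUT", data.shape, "FP32")
    inp.set_data_from_numpy(data)

    result = client.infer(
        model_name="stateful_model",
        inputs=[inp],
        sequence_id=sequence_id,
        sequence_start=start,        # resets the state on the first request
        sequence_end=(step == 2),    # releases the batcher slot at the end
    )
    print(result.as_numpy("OUTPUT"))
```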
Alternatives and similar repositories for stateful_backend
Users interested in stateful_backend are comparing it to the libraries listed below.
- The Triton backend for the ONNX Runtime. ☆162 · Updated 2 weeks ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆211 · Updated 5 months ago
- The Triton backend for TensorRT. ☆78 · Updated 2 weeks ago
- The Triton backend for PyTorch TorchScript models. ☆159 · Updated 2 weeks ago
- Dynamic batching library for Deep Learning inference, with tutorials for LLM and GPT scenarios. ☆103 · Updated last year
- ☆68 · Updated 2 years ago
- The Triton backend for TensorFlow. ☆53 · Updated 3 months ago
- Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server. ☆67 · Updated 2 weeks ago
- Common source, scripts and utilities shared across all Triton repositories. ☆76 · Updated 2 weeks ago
- TRITONCACHE implementation of a Redis cache. ☆15 · Updated 2 weeks ago
- Experiments with inference on Llama. ☆104 · Updated last year
- ☆15 · Updated 2 weeks ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆138 · Updated last year
- The core library and APIs implementing the Triton Inference Server. ☆150 · Updated last week
- IBM development fork of https://github.com/huggingface/text-generation-inference. ☆61 · Updated last week
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval. ☆58 · Updated last year
- Plugin for deploying MLflow models to TorchServe. ☆110 · Updated 2 years ago
- Implementation of "Efficient Multi-vector Dense Retrieval with Bit Vectors", ECIR 2024. ☆65 · Updated 11 months ago
- Tutorial on how to convert machine-learned models to ONNX. ☆16 · Updated 2 years ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated last week
- Benchmark suite for LLMs from Fireworks.ai. ☆83 · Updated this week
- OpenAI-compatible API for the TensorRT-LLM Triton backend (see the sketch after this list). ☆215 · Updated last year
- Faster Learned Sparse Retrieval with Block-Max Pruning, ACM SIGIR 2024. ☆31 · Updated this week
- Common source, scripts and utilities for creating Triton backends. ☆347 · Updated 2 weeks ago
- ☆25 · Updated this week
- Make Triton easier. ☆47 · Updated last year
- Minimal PyTorch implementation of BM25 (with sparse tensors). ☆104 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆266 · Updated 11 months ago
- ☆296 · Updated last week
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆85 · Updated 10 months ago