asprenger / ray_vllm_inference
A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.
☆79 · Updated last year
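For context, the pattern a service like this implements is a Ray Serve deployment that wraps a vLLM engine behind an HTTP endpoint: Serve handles routing, replicas, and scaling, while vLLM handles model loading and generation. The sketch below is illustrative rather than the repo's actual code; the model name, payload schema, and route are assumptions.

```python
# Minimal sketch of a Ray Serve deployment wrapping vLLM (not the repo's
# actual implementation; model name and request schema are illustrative).
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams


@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMDeployment:
    def __init__(self, model: str = "facebook/opt-125m"):
        # Each Serve replica loads the model once; Serve handles HTTP
        # routing and replica scaling on top.
        self.llm = LLM(model=model)

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        params = SamplingParams(max_tokens=int(payload.get("max_tokens", 128)))
        # Blocking generate keeps the sketch short; a production service
        # would typically use vLLM's async engine instead.
        outputs = self.llm.generate([payload["prompt"]], params)
        return {"text": outputs[0].outputs[0].text}


app = VLLMDeployment.bind()
# Deploy with serve.run(app), then POST {"prompt": "Hello"} to the endpoint.
```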
Alternatives and similar repositories for ray_vllm_inference
Users interested in ray_vllm_inference are comparing it to the libraries listed below.
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray ☆130 · Updated 2 months ago
- vLLM Router ☆51 · Updated last year
- Benchmarking the serving capabilities of vLLM ☆56 · Updated last year
- ☆56 · Updated last year
- A collection of all available inference solutions for LLMs ☆93 · Updated 9 months ago
- Comparison of Language Model Inference Engines ☆236 · Updated 11 months ago
- Self-host LLMs with vLLM and BentoML ☆161 · Updated last week
- ☆64 · Updated 8 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last week
- ☆317 · Updated last week
- Deploy a light, full OpenAI-compatible API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆44 · Updated last year
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- ☆267 · Updated last week
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆139 · Updated last year
- ☆90 · Updated last year
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆730 · Updated this week
- vLLM adapter for a TGIS-compatible gRPC server ☆45 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- Data preparation code for the Amber 7B LLM ☆93 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆262 · Updated this week
- A pipeline for LLM knowledge distillation ☆110 · Updated 8 months ago
- Common recipes to run vLLM ☆245 · Updated last week
- A high-performance inference system for large language models, designed for production environments ☆486 · Updated 3 weeks ago
- Dynamic batching library for deep learning inference, with tutorials for LLM and GPT scenarios ☆106 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆132 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- ☆51 · Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- Multi-Faceted AI Agent and Workflow Autotuning. Automatically optimizes LangChain, LangGraph, and DSPy programs for better quality, lower exe… ☆266 · Updated 6 months ago
- The driver for LMCache core to run in vLLM ☆58 · Updated 10 months ago