substratusai / vllm-docker
⭐54 · Updated 2 months ago
Alternatives and similar repositories for vllm-docker:
Users interested in vllm-docker are comparing it to the libraries listed below.
- Self-host LLMs with vLLM and BentoML (⭐94, updated this week)
- Scale your RAG pipeline using Ragswift: a scalable centralized embeddings management platform (⭐37, updated last year)
- Experiments with inference on Llama (⭐104, updated 9 months ago)
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (⭐65, updated 11 months ago)
- Machine Learning Serving focused on GenAI with simplicity as the top priority (⭐58, updated 2 months ago)
- Benchmark suite for LLMs from Fireworks.ai (⭐69, updated last month)
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API (⭐45, updated 5 months ago)
- Deploys a light and full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models (⭐41, updated 8 months ago)
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs (⭐224, updated this week)
- IBM development fork of https://github.com/huggingface/text-generation-inference (⭐60, updated 3 months ago)
- (⭐53, updated 9 months ago)
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models (⭐136, updated 7 months ago)
- The backend behind the LLM-Perf Leaderboard (⭐10, updated 10 months ago)
- Using LlamaIndex with Ray for productionizing LLM applications (⭐71, updated last year)
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models (⭐104, updated 3 months ago)
- Embed anything (⭐29, updated 9 months ago)
- An OpenAI Completions API-compatible server for NLP transformer models (⭐64, updated last year)
- High-level library for batched embeddings generation, blazingly fast web-based RAG, and quantized index processing ⚡ (⭐67, updated 4 months ago)
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (⭐87, updated this week)
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) (⭐91, updated 2 months ago)
- A deep dive into HyDE for advanced LLM RAG + 💡 Introducing AutoHyDE, a semi-supervised framework to improve the effectiveness, covera… (⭐32, updated 11 months ago)
- Client Code Examples, Use Cases and Benchmarks for the Enterprise h2oGPTe RAG-Based GenAI Platform (⭐83, updated last week)
- (⭐20, updated last month)
- Python client library for improving your LLM app accuracy (⭐97, updated last month)
- (⭐16, updated 9 months ago)
- (⭐99, updated 6 months ago)
- A framework for evaluating function calls made by LLMs (⭐37, updated 8 months ago)
- Vector Database with support for late interaction and token-level embeddings