substratusai / vllm-docker (☆64, updated 8 months ago)
Alternatives and similar repositories for vllm-docker
Users interested in vllm-docker are comparing it to the libraries listed below.
- Self-host LLMs with vLLM and BentoML (☆161, updated last week)
- IBM development fork of https://github.com/huggingface/text-generation-inference (☆62, updated 2 months ago)
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models (☆139, updated last year)
- ☆18 (updated last year)
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models (☆115, updated 7 months ago)
- ☆198 (updated last year)
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving (☆78, updated last year)
- Tutorial to get started with SkyPilot! (☆58, updated last year)
- Machine learning serving focused on GenAI, with simplicity as the top priority (☆59, updated last month)
- Tutorial for building an LLM router (☆236, updated last year)
- 🚀 Scale your RAG pipeline using Ragswift: a scalable, centralized embeddings management platform (☆38, updated last year)
- ☆51 (updated last year)
- Deployment of a light and full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models (☆44, updated last year)
- The backend behind the LLM-Perf Leaderboard (☆11, updated last year)
- 🔎 A deep-dive into HyDE for Advanced LLM RAG + 💡 Introducing AutoHyDE, a semi-supervised framework to improve the effectiveness, covera… (☆33, updated last year)
- Evaluation of the BM42 sparse indexing algorithm (☆72, updated last year)
- Using LlamaIndex with Ray for productionizing LLM applications (☆71, updated 2 years ago)
- ☆81 (updated 3 weeks ago)
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API (☆45, updated last year)
- A collection of all available inference solutions for LLMs (☆93, updated 9 months ago)
- Hugging Face Inference Toolkit, used to serve transformers, sentence-transformers, and diffusers models (☆88, updated 2 weeks ago)
- GPT-4 Level Conversational QA Trained in a Few Hours (☆66, updated last year)
- Code for evaluating with Flow-Judge-v0.1, an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… (☆78, updated last year)
- TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models access… (☆114, updated last year)
- ☆138 (updated 3 months ago)
- Simple examples using Argilla tools to build AI (☆56, updated last year)
- Develop, evaluate, and monitor LLM applications at scale (☆98, updated last year)
- Python client library for improving your LLM app accuracy (☆97, updated 9 months ago)
- Data preparation code for the Amber 7B LLM (☆93, updated last year)
- Experiments with inference on Llama (☆103, updated last year)