substratusai / vllm-docker
☆64 · Updated 7 months ago
Alternatives and similar repositories for vllm-docker
Users interested in vllm-docker are comparing it to the libraries listed below.
- Self-host LLMs with vLLM and BentoML ☆156 · Updated 2 weeks ago
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆58 · Updated last month
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆111 · Updated 7 months ago
- Deploys a lightweight, full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆44 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 2 months ago
- A collection of all available inference solutions for LLMs ☆92 · Updated 8 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆259 · Updated this week
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- Tutorial to get started with SkyPilot! ☆57 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated last year
- The backend behind the LLM-Perf Leaderboard ☆11 · Updated last year
- ☆138 · Updated 2 months ago
- Elasticsearch integration into LangChain ☆65 · Updated 2 weeks ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆68 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated last year
- ☆51 · Updated last year
- ☆79 · Updated last week
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 · Updated 2 years ago
- Benchmarking the serving capabilities of vLLM ☆55 · Updated last year
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆88 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated last month
- Tutorial for building an LLM router ☆235 · Updated last year
- ☆197 · Updated last year
- ☆66 · Updated 5 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
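Most of the servers listed above, like vllm-docker itself, ultimately expose an OpenAI-compatible HTTP API. The sketch below assumes a vLLM server is already running locally on port 8000 (for example via the upstream `vllm/vllm-openai` Docker image, which may differ from the substratusai image); the model name, port, and API key are illustrative placeholders, not values from this repository.

```python
# Minimal sketch: querying a locally hosted vLLM server through its
# OpenAI-compatible API. Assumes the server was started separately, e.g.
#   docker run --gpus all -p 8000:8000 vllm/vllm-openai --model mistralai/Mistral-7B-Instruct-v0.2
# Model name and port are placeholders for whatever the server actually loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="not-needed",                 # vLLM ignores the key unless the server sets --api-key
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # must match the model served by vLLM
    messages=[{"role": "user", "content": "Summarize what vLLM is in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the client only depends on the OpenAI-compatible endpoint, the same snippet works against most of the serving projects listed above by changing `base_url` and the model name.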