substratusai / vllm-docker
☆58 · Updated last month
Alternatives and similar repositories for vllm-docker:
Users interested in vllm-docker are comparing it to the repositories listed below.
- ☆18 · Updated 8 months ago
- Self-host LLMs with vLLM and BentoML ☆107 · Updated this week
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆65 · Updated last year
- Deploys a lightweight, full OpenAI-compatible API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆42 · Updated 9 months ago
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆58 · Updated 3 weeks ago
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆136 · Updated 9 months ago
- ☆53 · Updated 11 months ago
- ☆66 · Updated 11 months ago
- 🔎 A deep-dive into HyDE for Advanced LLM RAG + 💡 Introducing AutoHyDE, a semi-supervised framework to improve the effectiveness, covera… ☆32 · Updated last year
- The backend behind the LLM-Perf Leaderboard ☆10 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated last year
- ☆16 · Updated 11 months ago
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 6 months ago
- Testing the speed and accuracy of RAG with and without a cross-encoder reranker. ☆48 · Updated last year
- ☆101 · Updated 8 months ago
- Simple examples using Argilla tools to build AI ☆52 · Updated 5 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆60 · Updated 4 months ago
- An OpenAI Completions API compatible server for NLP transformers models ☆65 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 7 months ago
- Inference server benchmarking tool ☆56 · Updated last week
- ☆74 · Updated 3 months ago
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆67 · Updated 6 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆70 · Updated 2 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆87 · Updated last week
- Data preparation code for Amber 7B LLM ☆89 · Updated 11 months ago
- A high-performance batching router that optimizes throughput for text inference workloads ☆16 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- Experiments with inference on Llama ☆104 · Updated 10 months ago