substratusai / vllm-docker
☆67 · Updated 10 months ago
Alternatives and similar repositories for vllm-docker
Users interested in vllm-docker are comparing it to the libraries listed below.
- Self-host LLMs with vLLM and BentoML ☆168 · Updated 3 weeks ago
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆115 · Updated 10 months ago
- ☆18 · Updated last year
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated last month
- Tutorial to get started with SkyPilot! ☆58 · Updated last year
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆84 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆138 · Updated last year
- Develop, evaluate, and monitor LLM applications at scale ☆100 · Updated last year
- ☆82 · Updated 3 months ago
- Evaluation of the bm42 sparse indexing algorithm ☆72 · Updated last year
- Simple examples using Argilla tools to build AI ☆57 · Updated last year
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆69 · Updated 2 months ago
- A collection of all available inference solutions for LLMs ☆94 · Updated 11 months ago
- ☆51 · Updated last year
- GPT-4 Level Conversational QA Trained in a Few Hours ☆65 · Updated last year
- ☆198 · Updated 2 years ago
- Tutorial for building an LLM router ☆244 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆287 · Updated last week
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆90 · Updated last month
- Large Language Model Hosting Container ☆91 · Updated 4 months ago
- An OpenAI Completions API-compatible server for NLP transformer models ☆66 · Updated 2 years ago
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- A light yet full OpenAI API deployment for production with vLLM, supporting /v1/embeddings with all embedding models. ☆44 · Updated last year
- ☆101 · Updated last year
- 🔎 A deep-dive into HyDE for Advanced LLM RAG + 💡 Introducing AutoHyDE, a semi-supervised framework to improve the effectiveness, covera… ☆34 · Updated last year
- A high-performance batching router that optimises for maximum throughput on text inference workloads ☆16 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year