substratusai / vllm-docker
★62 · Updated 2 months ago
Alternatives and similar repositories for vllm-docker
Users interested in vllm-docker are comparing it to the libraries listed below.
- A deep-dive into HyDE for Advanced LLM RAG + Introducing AutoHyDE, a semi-supervised framework to improve the effectiveness, covera… ★32 · Updated last year
- The backend behind the LLM-Perf Leaderboard ★10 · Updated last year
- ★18 · Updated 10 months ago
- Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ★38 · Updated last year
- ★19 · Updated last year
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ★59 · Updated 2 months ago
- An OpenAI Completions API compatible server for NLP transformers models ★65 · Updated last year
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ★78 · Updated last week
- Using LlamaIndex with Ray for productionizing LLM applications ★71 · Updated last year
- Self-host LLMs with vLLM and BentoML ★123 · Updated this week
- ★53 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ★106 · Updated 2 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ★33 · Updated last month
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ★69 · Updated this week
- Vector Database with support for late interaction and token level embeddings. ★55 · Updated 8 months ago
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ★66 · Updated 7 months ago
- Simple examples using Argilla tools to build AI ★53 · Updated 7 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ★87 · Updated this week
- ★77 · Updated last year
- Evaluation of bm42 sparse indexing algorithm ★68 · Updated 11 months ago
- OpenMindedChatbot is a Proof Of Concept that leverages the power of Open source Large Language Models (LLM) with Function Calling capabil… ★29 · Updated last year
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ★70 · Updated 7 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ★49 · Updated 11 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ★67 · Updated last year
- experiments with inference on llama ★104 · Updated last year
- ★66 · Updated last year
- ★18 · Updated last year
- Using modal.com to process FineWeb-edu data ★20 · Updated 2 months ago
- ★41 · Updated last week
- ★19 · Updated 8 months ago