runpod-workers / worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
★318 · Updated 3 weeks ago
Alternatives and similar repositories for worker-vllm
Users interested in worker-vllm are comparing it to the libraries listed below.
- Python library for RunPod API and serverless worker SDK. ★232 · Updated last week
- A fast batching API to serve LLM models ★181 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM… ★567 · Updated 3 months ago
- A multimodal, function calling powered LLM webui. ★214 · Updated 8 months ago
- TheBloke's Dockerfiles ★303 · Updated last year
- A bagel, with everything. ★320 · Updated last year
- Examples of models deployable with Truss ★171 · Updated this week
- This is our own implementation of 'Layer Selective Rank Reduction' ★238 · Updated last year
- This code sets up a simple yet robust server using FastAPI for handling asynchronous requests for embedding generation and reranking task… ★69 · Updated last year
- Web UI for ExLlamaV2 ★495 · Updated 4 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ★153 · Updated last year
- Function calling-based LLM agents ★285 · Updated 8 months ago
- ★157 · Updated 10 months ago
- A curated list of amazing RunPod projects, libraries, and resources ★112 · Updated 9 months ago
- A benchmark for emotional intelligence in large language models ★302 · Updated 10 months ago
- Local LLM ReAct Agent with Guidance ★158 · Updated 2 years ago
- Customizable implementation of the self-instruct paper. ★1,043 · Updated last year
- ★198 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ★171 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ★208 · Updated 10 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ★222 · Updated last year
- Large-scale LLM inference engine ★1,440 · Updated last week
- Fast parallel LLM inference for MLX ★189 · Updated 10 months ago
- A simple Python sandbox for helpful LLM data agents ★264 · Updated 11 months ago
- A Python package for developing AI applications with local LLMs. ★150 · Updated 5 months ago
- Merge Transformers language models by use of gradient parameters. ★207 · Updated 9 months ago
- ★52 · Updated last year
- Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ★137 · Updated 10 months ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ★115 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ★147 · Updated last year