runpod-workers / worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
☆327 · Updated this week
Alternatives and similar repositories for worker-vllm
Users interested in worker-vllm are comparing it to the libraries listed below.
- A fast batching API to serve LLM models · ☆183 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … · ☆573 · Updated 4 months ago
- TheBloke's Dockerfiles · ☆305 · Updated last year
- 🐍 | Python library for RunPod API and serverless worker SDK · ☆236 · Updated last week
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI · ☆222 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com · ☆115 · Updated last year
- A multimodal, function-calling-powered LLM webui · ☆214 · Updated 9 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' · ☆239 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models · ☆173 · Updated last year
- One-click templates for inferencing Language Models · ☆188 · Updated last week
- ☆157 · Updated 11 months ago
- Local LLM ReAct Agent with Guidance · ☆158 · Updated 2 years ago
- Open Source Text Embedding Models with OpenAI Compatible API · ☆153 · Updated 11 months ago
- ☆205 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆78 · Updated last year
- Function-calling-based LLM agents · ☆288 · Updated 9 months ago
- A bagel, with everything. · ☆321 · Updated last year
- Examples of models deployable with Truss · ☆184 · Updated last week
- ☆52 · Updated last year
- A simple Python sandbox for helpful LLM data agents · ☆267 · Updated last year
- ☆114 · Updated 6 months ago
- ☆199 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Hugging Face Hub · ☆162 · Updated last year
- ☆463 · Updated last year
- A curated list of amazing RunPod projects, libraries, and resources · ☆115 · Updated 10 months ago
- Fast parallel LLM inference for MLX · ☆193 · Updated 11 months ago
- Customizable implementation of the self-instruct paper · ☆1,045 · Updated last year
- Merge Transformers language models by use of gradient parameters · ☆206 · Updated 10 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2 · ☆154 · Updated last year
- ☆900 · Updated 9 months ago