runpod-workers / worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
☆380, updated this week
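A worker deployed from this template is typically invoked through RunPod's serverless HTTP API. The sketch below builds such a request using only the Python standard library; the `/runsync` route and `Bearer` auth header follow RunPod's documented serverless pattern, but the endpoint ID, API key, and the exact `input` schema (`prompt`, `sampling_params`) are placeholder assumptions that may vary by worker version.

```python
import json
import urllib.request

# Placeholder credentials -- substitute your own endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-runpod-api-key"


def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for RunPod's synchronous serverless route.

    The /runsync URL pattern and Authorization header follow RunPod's
    serverless API convention; the body schema is an assumption based on
    typical worker-vllm payloads and may differ across versions.
    """
    body = json.dumps(
        {
            "input": {
                "prompt": prompt,
                "sampling_params": {"max_tokens": 64},
            }
        }
    ).encode("utf-8")
    return urllib.request.Request(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Construct (but do not send) a request, then inspect it.
req = build_request("Hello, world")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would block until the worker returns a completion; the asynchronous `/run` route is the usual alternative for longer generations.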
Alternatives and similar repositories for worker-vllm
Users interested in worker-vllm are comparing it to the libraries listed below.
- A fast batching API to serve LLM models (☆188, updated last year)
- 🐍 | Python library for RunPod API and serverless worker SDK (☆257, updated this week)
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … (☆606, updated 9 months ago)
- Examples of models deployable with Truss (☆208, updated this week)
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language (☆315, updated last year)
- TheBloke's Dockerfiles (☆307, updated last year)
- Function calling-based LLM agents (☆289, updated last year)
- This is our own implementation of 'Layer Selective Rank Reduction' (☆239, updated last year)
- ☆163, updated 3 months ago
- Tutorial for building an LLM router (☆235, updated last year)
- Low-rank adapter extraction for fine-tuned transformers models (☆178, updated last year)
- A simple Python sandbox for helpful LLM data agents (☆292, updated last year)
- Merge Transformers language models by use of gradient parameters (☆208, updated last year)
- A tool for generating function arguments and choosing what function to call with local LLMs (☆434, updated last year)
- A multimodal, function-calling-powered LLM webui (☆216, updated last year)
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com (☆119, updated last year)
- A benchmark for emotional intelligence in large language models (☆378, updated last year)
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI (☆221, updated last year)
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2 (☆165, updated last year)
- Large-scale LLM inference engine (☆1,591, updated this week)
- One-click templates for inferencing language models (☆218, updated 3 months ago)
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … (☆190, updated last year)
- A bagel, with everything (☆324, updated last year)
- ☆106, updated 2 months ago
- Fast parallel LLM inference for MLX (☆232, updated last year)
- An OpenAI-API-compatible API for chat with image input and questions about the images, aka multimodal (☆265, updated 8 months ago)
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… (☆146, updated 2 years ago)
- ☆472, updated last year
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more (☆634, updated last month)
- An OpenAI-like LLaMA inference API (☆113, updated 2 years ago)