runpod-workers / worker-vllm
The RunPod worker template for serving our large language model endpoints. Powered by vLLM.
★386 · Updated this week
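As a rough illustration of how a serverless vLLM worker like this is typically invoked, the sketch below builds a request body for RunPod's serverless API. The endpoint ID is a placeholder, and the exact payload fields (`prompt`, `sampling_params`) are assumptions about the worker's input schema, not taken from this listing:

```python
import json

# Hypothetical endpoint ID -- substitute your own deployed endpoint.
ENDPOINT_ID = "your-endpoint-id"
RUN_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"


def build_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build a JSON body for a vLLM-backed serverless worker.

    RunPod serverless workers receive their job data under a
    top-level "input" key; the generation fields here are
    illustrative, not authoritative.
    """
    return {
        "input": {
            "prompt": prompt,
            "sampling_params": {
                "max_tokens": max_tokens,
                "temperature": 0.7,
            },
        }
    }


if __name__ == "__main__":
    body = build_request("Explain vLLM in one sentence.")
    # The body would be POSTed to RUN_URL with an API-key header.
    print(json.dumps(body, indent=2))
```

In practice the body is sent with an `Authorization` header carrying a RunPod API key; `/runsync` blocks until the job completes, while `/run` returns a job ID to poll.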
Alternatives and similar repositories for worker-vllm
Users interested in worker-vllm are comparing it to the libraries listed below.
- Python library for RunPod API and serverless worker SDK. ★258 · Updated 2 weeks ago
- A fast batching API to serve LLM models ★189 · Updated last year
- Examples of models deployable with Truss ★212 · Updated this week
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ★315 · Updated last year
- TheBloke's Dockerfiles ★308 · Updated last year
- ★207 · Updated last year
- Function calling-based LLM agents ★289 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ★610 · Updated 9 months ago
- Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation … ★190 · Updated last year
- ★164 · Updated 4 months ago
- One-click templates for inferencing Language Models ★221 · Updated 2 weeks ago
- Tutorial for building an LLM router ★236 · Updated last year
- A simple Python sandbox for helpful LLM data agents ★297 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ★222 · Updated last year
- An OpenAI API compatible API for chat with image input and questions about the images, aka Multimodal. ★266 · Updated 9 months ago
- ★50 · Updated 2 years ago
- This is our own implementation of 'Layer Selective Rank Reduction' ★240 · Updated last year
- Open Source Text Embedding Models with OpenAI Compatible API ★164 · Updated last year
- An OpenAI-like LLaMA inference API ★113 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ★146 · Updated 2 years ago
- ★198 · Updated last year
- A benchmark for emotional intelligence in large language models ★389 · Updated last year
- A multimodal, function-calling powered LLM webui. ★217 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ★179 · Updated last year
- ★164 · Updated 9 months ago
- Merge Transformers language models by use of gradient parameters. ★209 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ★119 · Updated last year
- The easiest and fastest way to run AI-generated Python code safely ★341 · Updated last year
- FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text) ★246 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ★165 · Updated last year