runpod / runpod-python
🐍 | Python library for RunPod API and serverless worker SDK.
⭐216 · Updated 2 months ago
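For reference, a minimal sketch of how a serverless worker is typically wired up with this library, using the `runpod.serverless.start` entry point; the handler name and the `"prompt"` input key are illustrative assumptions rather than anything taken from the listing below:

```python
# Minimal sketch of a RunPod serverless worker (handler name and the
# "prompt" input key are illustrative assumptions).
import runpod


def handler(job):
    # Each submitted job carries its payload under the "input" key.
    job_input = job["input"]
    prompt = job_input.get("prompt", "")
    # Whatever the handler returns is reported back as the job's output.
    return {"echo": prompt}


# Register the handler and start the worker's job-polling loop.
runpod.serverless.start({"handler": handler})
```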
Alternatives and similar repositories for runpod-python:
Users interested in runpod-python are comparing it to the libraries listed below.
- 🧰 | RunPod CLI for pod management (⭐290 · Updated 2 months ago)
- A simple worker that can be used as a starting point to build your own custom RunPod Endpoint API worker. (⭐99 · Updated 4 months ago)
- A curated list of amazing RunPod projects, libraries, and resources (⭐108 · Updated 7 months ago)
- 🐳 | Dockerfiles for the RunPod container images used for our official templates. (⭐172 · Updated last week)
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. (⭐294 · Updated last week)
- RunPod Serverless Worker for Oobabooga Text Generation API for LLMs (⭐1 · Updated 10 months ago)
- Running Ollama with RunPod (⭐58 · Updated 7 months ago)
- ☁️ | REPLACED BY https://github.com/runpod-workers | Official set of serverless workers provided by RunPod as endpoints. (⭐57 · Updated last year)
- Automatic1111 serverless worker. (⭐85 · Updated 3 months ago)
- ⭐52 · Updated last year
- Examples of models deployable with Truss (⭐165 · Updated this week)
- Vast.ai Python and CLI API client (⭐137 · Updated last week)
- ⭐84 · Updated last year
- TheBloke's Dockerfiles (⭐306 · Updated last year)
- Text WebUI extension to add clever Notebooks to Chat mode (⭐139 · Updated last year)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (⭐37 · Updated last year)
- XTTSv2 Extension for oobabooga text-generation-webui (⭐152 · Updated last year)
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA (⭐123 · Updated last year)
- LoRA inference model packaged with Cog (⭐74 · Updated last year)
- ⭐27 · Updated last month
- Falcon LLM ggml framework with CPU and GPU support (⭐246 · Updated last year)
- Made slight modifications to the Tortoise API, provided 3 additional scripts to make using Tortoise easier. Less focus on cloning makes s… (⭐52 · Updated 10 months ago)
- 4-bit quantization of LLaMA using GPTQ (⭐130 · Updated last year)
- Wheels for llama-cpp-python compiled with cuBLAS support (⭐96 · Updated last year)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (⭐65 · Updated last year)
- Some models defined with Cog to show you how it works (⭐151 · Updated 3 weeks ago)
- RunPod worker of the faster-whisper model for Serverless Endpoint. (⭐89 · Updated last month)
- ⭐66 · Updated 5 months ago
- The code we currently use to fine-tune models. (⭐114 · Updated 10 months ago)