YellowRoseCx / koboldcpp-rocm
AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading
☆559 · Updated this week
Alternatives and similar repositories for koboldcpp-rocm:
Users interested in koboldcpp-rocm are comparing it to the libraries listed below.
- The most powerful and modular Stable Diffusion GUI, API and backend with a graph/nodes interface. Now ZLUDA enhanced for better AMD GPU p… ☆295 · Updated this week
- Prebuilt Windows ROCm Libs for gfx1031 and gfx1032 ☆106 · Updated 2 weeks ago
- CUDA on AMD GPUs ☆392 · Updated last week
- Forge for stable-diffusion-webui-amdgpu (formerly stable-diffusion-webui-directml) ☆89 · Updated last month
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆196 · Updated last week
- An OAI-compatible exllamav2 API that's both lightweight and fast ☆822 · Updated last week
- Croco.Cpp is a 3rd-party testground for KoboldCPP, a simple one-file way to run various GGML/GGUF models with KoboldAI's UI. (for Croco.C… ☆99 · Updated this week
- Web UI for ExLlamaV2 ☆484 · Updated last month
- KoboldAI is generative AI software optimized for fictional use, but capable of much more! ☆390 · Updated last month
- A zero-dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde ☆103 · Updated this week
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆6,682 · Updated this week
- Add AMD support in ZLUDA ☆45 · Updated last month
- ☆606 · Updated 2 weeks ago
- LLM frontend in a single HTML file ☆378 · Updated last month
- Extensions API for SillyTavern ☆589 · Updated 2 months ago
- DEPRECATED! ☆53 · Updated 8 months ago
- An extension for SillyTavern that lets characters think before responding ☆94 · Updated last week
- Stable Diffusion Knowledge Base (setups, basics, guides and more) ☆61 · Updated 2 weeks ago
- A self-hosted web UI for 30+ generative AI models ☆559 · Updated this week
- ROCm library files for gfx1103 and other AMD GPU architectures, for use on Windows ☆371 · Updated last month
- Transparent proxy server for llama.cpp's server that provides automatic model swapping ☆204 · Updated this week
- A simple FastAPI server to run XTTSv2 ☆480 · Updated 7 months ago
- This repo turns your PC into an AI Horde worker node ☆260 · Updated last month
- A Quick Reply set for SillyTavern to gently guide the model's output ☆45 · Updated 2 months ago
- SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution ☆1,436 · Updated 4 months ago
- My personal fork of koboldcpp where I hack in experimental samplers ☆44 · Updated 9 months ago
- ☆154 · Updated last year
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆132 · Updated 9 months ago
- Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all Direc… ☆298 · Updated last year
- Launcher scripts for SillyTavern and ST-Extras ☆297 · Updated this week