hqnicolas / StableDiffusionROCmLinks
Stable Diffusion v1.7.0 & v1.9.3 & v1.10.1 on RDNA2 RDNA3 AMD ROCm with Docker-compose
☆13 · Updated last year
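The repository above packages Stable Diffusion for RDNA2/RDNA3 GPUs behind Docker Compose. A minimal sketch of what such a service definition typically looks like is below; the image name and WebUI port are assumptions, while the `/dev/kfd` and `/dev/dri` device mappings, the `video`/`render` groups, and the `HSA_OVERRIDE_GFX_VERSION` values are standard ROCm container practice:

```yaml
# Hedged sketch, not the repo's actual compose file.
services:
  stable-diffusion:
    image: rocm/pytorch:latest        # assumption: any ROCm-enabled PyTorch image
    ports:
      - "7860:7860"                   # typical Stable Diffusion WebUI port
    devices:
      - /dev/kfd                      # ROCm compute interface
      - /dev/dri                      # GPU render nodes
    group_add:
      - video
      - render
    environment:
      # RDNA2 (gfx103x) commonly needs 10.3.0; RDNA3 (gfx110x) uses 11.0.0
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    security_opt:
      - seccomp=unconfined            # often required for ROCm memory mapping
```

With a file like this, `docker compose up` brings the container up with GPU access, provided the host has the ROCm kernel driver loaded and the user is in the `video`/`render` groups.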
Alternatives and similar repositories for StableDiffusionROCm
Users interested in StableDiffusionROCm are comparing it to the libraries listed below
- General Site for the GFX803 ROCm Stuff☆134 · Updated 4 months ago
- Make use of Intel Arc Series GPU to Run Ollama, StableDiffusion, Whisper and Open WebUI, for image generation, speech recognition and int…☆217 · Updated last month
- AMD APU compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.☆138 · Updated 2 weeks ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1☆216 · Updated last month
- ☆257 · Updated 7 months ago
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c…☆154 · Updated 5 months ago
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons…☆44 · Updated 2 months ago
- llama-swap + a minimal ollama compatible api☆38 · Updated this week
- Fork of ollama for vulkan support☆109 · Updated 10 months ago
- A library and CLI utilities for managing performance states of NVIDIA GPUs.☆31 · Updated last year
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a…☆326 · Updated last month
- Chat with AI using whisper, LLMs, and TTS☆24 · Updated last year
- Hyprland atomic desktop☆35 · Updated 2 weeks ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints.☆267 · Updated this week
- DEPRECATED!☆50 · Updated last year
- Docker variants of oobabooga's text-generation-webui, including pre-built images.☆443 · Updated 2 months ago
- ☆718 · Updated last week
- ☆50 · Updated 2 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs.☆104 · Updated 2 months ago
- LLM Benchmark for Throughput via Ollama (Local LLMs)☆319 · Updated this week
- Handy tool to measure the performance and efficiency of LLM workloads.☆73 · Updated 8 months ago
- General Tool-calling API Proxy☆55 · Updated 5 months ago
- archiso with zfs embedded☆73 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs.☆572 · Updated this week
- Non-intimidating guide to create a KVM GPU Passthrough via libvirt/virt-manager on systems with only one GPU.☆119 · Updated 2 weeks ago
- Synchronize snapper snapshots to a borg repository☆59 · Updated 2 weeks ago
- A small guide to help users correctly passthrough their GPUs to an unprivileged LXC container☆28 · Updated 9 months ago
- Guide to run Virtual Machines with KVM and QEMU☆127 · Updated last month
- Docker Compose for Stable Diffusion ROCm☆33 · Updated 2 months ago
- Fedora Atomic images for wayland compositors☆276 · Updated this week