woodrex83 / ROCm-For-RX580
ROCm docker images with fixes/support for the legacy gfx803 architecture, e.g. Radeon RX 590/RX 580/RX 570/RX 480
☆79 · Updated 7 months ago
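Since gfx803 cards are no longer officially supported by recent ROCm releases, images like this typically ship rebuilt ROCm/PyTorch components. As a minimal sketch (assuming a ROCm-enabled PyTorch build inside such a container; the environment variables below are commonly suggested for gfx803 setups but are not necessarily required by this particular image), you can check whether the card is visible like this:

```python
# Minimal sketch: verify that a ROCm-enabled PyTorch build can see a gfx803 card
# (RX 580/570/480). Assumes torch was installed with ROCm support inside the
# container. The env vars below are commonly used workarounds for pre-Vega GPUs;
# whether they are needed depends on the specific image (assumption).
import os

os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "8.0.3")  # report the GPU as gfx803
os.environ.setdefault("ROC_ENABLE_PRE_VEGA", "1")           # enable pre-Vega support paths

import torch

if torch.cuda.is_available():  # on ROCm builds, torch.cuda maps to HIP devices
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))
else:
    print("No ROCm device visible; check /dev/kfd and /dev/dri passthrough into the container.")
```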
Alternatives and similar repositories for ROCm-For-RX580
Users interested in ROCm-For-RX580 are comparing it to the libraries listed below
- General Site for the GFX803 ROCm Stuff · ☆129 · Updated 3 months ago
- A manual for using a Tesla P40 GPU · ☆139 · Updated last year
- ☆419 · Updated 8 months ago
- Fork of ollama for Vulkan support · ☆110 · Updated 10 months ago
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS · ☆67 · Updated 2 years ago
- ☆236 · Updated 2 years ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 · ☆216 · Updated 3 weeks ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration · ☆139 · Updated last week
- Installation script for AI applications using ROCm on Linux. · ☆34 · Updated this week
- ☆63 · Updated 7 months ago
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards · ☆141 · Updated last year
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. · ☆533 · Updated last week
- A library and CLI utilities for managing performance states of NVIDIA GPUs. · ☆31 · Updated last year
- Using a Tesla P40 for Gaming with an Intel iGPU as Display Output on Windows 11 22H2 · ☆36 · Updated 2 years ago
- Install guide for ROCm and TensorFlow on Ubuntu for the RX580 · ☆129 · Updated last year
- A daemon that automatically manages the performance states of NVIDIA GPUs. · ☆101 · Updated last month
- ☆48 · Updated 2 years ago
- A zero-dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde · ☆147 · Updated this week
- Make PyTorch models at least run on APUs. · ☆56 · Updated 2 years ago
- GIMP AI plugins with OpenVINO Backend · ☆716 · Updated this week
- GPU Power and Performance Manager · ☆62 · Updated last year
- ☆110 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. · ☆261 · Updated last week
- Stable Diffusion and Flux in pure C/C++ · ☆24 · Updated last week
- ☆88 · Updated last week
- Run PyTorch with ROCm hardware acceleration on an RX590 (or similar GPU) · ☆23 · Updated 2 years ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading · ☆721 · Updated last month
- AMD APU compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. · ☆136 · Updated last week
- Croco.Cpp is a fork of KoboldCpp inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… · ☆154 · Updated last week
- Tutorial (including Dockerfiles and Docker Compose files) on getting GPU passthrough working for Docker on WSL2 Windows for AMD ROCm GPU… · ☆26 · Updated 7 months ago