woodrex83 / ROCm-For-RX580
ROCm Docker images with fixes/support for the legacy gfx803 architecture, e.g. Radeon RX 590/RX 580/RX 570/RX 480
☆76 Updated 6 months ago
Alternatives and similar repositories for ROCm-For-RX580
Users interested in ROCm-For-RX580 are comparing it to the repositories listed below.
- General Site for the GFX803 ROCm Stuff ☆126 Updated 2 months ago
- ☆234 Updated 2 years ago
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆68 Updated 2 years ago
- Fork of ollama for Vulkan support ☆107 Updated 9 months ago
- Install guide for ROCm and TensorFlow on Ubuntu for the RX 580 ☆129 Updated last year
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆97 Updated 2 weeks ago
- Make PyTorch models at least run on APUs. ☆57 Updated last year
- ☆63 Updated 6 months ago
- AI inferencing at the edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆714 Updated last month
- ☆414 Updated 7 months ago
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆140 Updated last year
- Prebuilt Windows ROCm libraries for gfx1031 and gfx1032 ☆165 Updated 8 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆103 Updated this week
- A zero-dependency web UI for any LLM backend, including KoboldCpp, OpenAI, and AI Horde ☆142 Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 Updated 2 weeks ago
- Croco.Cpp is a fork of KoboldCPP for inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆153 Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆451 Updated this week
- Installation script for AI applications on AMD Radeon cards using ROCm. ☆32 Updated this week
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆31 Updated last year
- The HIP Environment and ROCm Kit: a lightweight open-source build system for HIP and ROCm ☆112 Updated last week
- Core, junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆57 Updated 3 weeks ago
- Stable Diffusion and Flux in pure C/C++ ☆22 Updated this week
- A manual for using the Tesla P40 GPU ☆137 Updated last year
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. ☆241 Updated last week
- Adds AMD support in ZLUDA ☆76 Updated 4 months ago
- Stable Diffusion ComfyUI Docker/OCI image for Intel Arc GPUs ☆46 Updated last month
- Stable Diffusion GUI written in C++ ☆77 Updated last month
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. ☆120 Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,329 Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆21 Updated this week