woodrex83 / ROCm-For-RX580
ROCm Docker images with fixes/support for the legacy gfx803 architecture, e.g. Radeon RX 590/RX 580/RX 570/RX 480
☆74 · Updated 3 months ago
Alternatives and similar repositories for ROCm-For-RX580
Users interested in ROCm-For-RX580 are comparing it to the repositories listed below
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆65 · Updated last year
- General Site for the GFX803 ROCm Stuff ☆107 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 6 months ago
- AMD APU-compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod… ☆85 · Updated this week
- A script that automatically installs everything required to run selected AI interfaces on an AMD Radeon 7900 XTX. ☆27 · Updated this week
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆135 · Updated this week
- Fork of ollama for Vulkan support ☆100 · Updated 6 months ago
- AI inferencing at the edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆682 · Updated 2 weeks ago
- ☆381 · Updated 4 months ago
- Lightweight inference server for OpenVINO ☆202 · Updated this week
- A manual for using the Tesla P40 GPU ☆129 · Updated 9 months ago
- Core, junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆52 · Updated 3 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆94 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆148 · Updated this week
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆138 · Updated last year
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆28 · Updated 10 months ago
- ☆82 · Updated this week
- A zero-dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde ☆133 · Updated this week
- Run PyTorch with ROCm hardware acceleration on an RX 590 (or similar GPU) ☆23 · Updated 2 years ago
- Prebuilt Windows ROCm libs for gfx1031 and gfx1032 ☆159 · Updated 5 months ago
- Docker configuration for koboldcpp ☆34 · Updated last year
- ☆64 · Updated 3 months ago
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hotspot temperature readings on NVIDIA RTX 3000/4000 series GPUs ☆22 · Updated 10 months ago
- Stable Diffusion ComfyUI Docker/OCI image for Intel Arc GPUs ☆46 · Updated 7 months ago
- Adds AMD support in ZLUDA ☆71 · Updated last month
- ☆121 · Updated 9 months ago
- Clean and intuitive LLM roleplay client ☆113 · Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆484 · Updated this week
- ☆42 · Updated 2 years ago
- The default client software to create images for the AI-Horde ☆136 · Updated last month