ulyssesrr / docker-rocm-xtra
ROCm docker images with fixes/support for extra architectures, such as gfx803/gfx1010.
☆31 · Updated 2 years ago
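For anyone trying these images, a quick way to confirm that a container actually sees the GPU is a short PyTorch check. This is a minimal sketch, assuming the image ships a ROCm build of PyTorch; the helper name `report_rocm_device` is illustrative and not part of the repo.

```python
# Sanity check for GPU visibility inside a ROCm container.
# Assumption: a ROCm (HIP) build of PyTorch is installed in the image.
import torch

def report_rocm_device() -> None:
    """Print whether PyTorch sees a ROCm/HIP device and which GPU it is."""
    hip_version = getattr(torch.version, "hip", None)  # None on CUDA-only builds
    print(f"ROCm/HIP build: {hip_version or 'not a ROCm build'}")
    if torch.cuda.is_available():  # ROCm GPUs are exposed through the torch.cuda API
        print(f"Detected GPU: {torch.cuda.get_device_name(0)}")
    else:
        print("No GPU visible; check that /dev/kfd and /dev/dri are passed to the container.")

if __name__ == "__main__":
    report_rocm_device()
```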
Alternatives and similar repositories for docker-rocm-xtra
Users interested in docker-rocm-xtra are comparing it to the libraries listed below.
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆66 · Updated last year
- ☆230 · Updated 2 years ago
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆138 · Updated last year
- General Site for the GFX803 ROCm Stuff ☆117 · Updated last month
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆211 · Updated 3 weeks ago
- Install guide of ROCm and Tensorflow on Ubuntu for the RX580 ☆126 · Updated last year
- ☆399 · Updated 6 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆699 · Updated last month
- A Python package extending the official PyTorch to easily obtain extra performance on Intel platforms ☆47 · Updated 9 months ago
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480 ☆76 · Updated 4 months ago
- Prebuilt Windows ROCm Libs for gfx1031 and gfx1032 ☆161 · Updated 6 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆106 · Updated last week
- CUDA on AMD GPUs ☆568 · Updated last month
- ROCm library files for gfx1103, updated along with other AMD GPU architectures, for use on Windows ☆632 · Updated 2 weeks ago
- Stable Diffusion web UI ☆330 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance ☆1,246 · Updated this week
- Build scripts for ROCm ☆186 · Updated last year
- Adapt IPEX to CUDA ☆34 · Updated 2 weeks ago
- AMD ROCm Installation Guide on RX 6600 XT + TensorFlow and PyTorch ☆78 · Updated 2 years ago
- A guide to the Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui ☆55 · Updated 2 years ago
- Fast inference engine for Transformer models ☆47 · Updated 11 months ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆280 · Updated this week
- Fork of ollama for Vulkan support ☆103 · Updated 7 months ago
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,061 · Updated this week
- Stable Diffusion ComfyUI Docker/OCI Image for Intel Arc GPUs ☆46 · Updated last month
- A simple webui for stable-diffusion.cpp ☆42 · Updated last week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆108 · Updated this week
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆54 · Updated 4 months ago
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆46 · Updated 5 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆147 · Updated this week