Firstbober / rocm-pytorch-gfx803-docker
A Docker image based on rocm/pytorch with support for gfx803 (Polaris 20/21 XT/PRO/XL: RX 580, RX 570, RX 560) and Python 3.8
☆24 · Updated 2 years ago
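For context, ROCm-based images like this one are typically launched with the host's GPU device nodes passed through to the container. A minimal sketch of such an invocation (the image tag and the `HSA_OVERRIDE_GFX_VERSION` value are assumptions — consult the repository's README for the exact name and any gfx803-specific environment variables):

```shell
# Hypothetical image tag; replace with the tag from the repo's README.
# --device passes the ROCm kernel driver and DRI nodes into the container;
# --group-add video grants the container user access to the GPU devices.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  firstbober/rocm-pytorch-gfx803 \
  python3 -c "import torch; print(torch.cuda.is_available())"
```

`torch.cuda.is_available()` returns `True` on ROCm builds of PyTorch when the GPU is visible, since ROCm reuses the CUDA API surface.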
Alternatives and similar repositories for rocm-pytorch-gfx803-docker
Users interested in rocm-pytorch-gfx803-docker are comparing it to the repositories listed below.
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆68 · Updated 2 years ago
- Copy of rocm/pytorch with gfx803 cards compiled in (see https://github.com/xuhuisheng/rocm-build/blob/develop/docs/gfx803.md) ☆20 · Updated 3 months ago
- ROCm docker images with fixes/support for extra architectures, such as gfx803/gfx1010 ☆31 · Updated 2 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last week
- Fork of ollama for Vulkan support ☆110 · Updated 9 months ago
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆378 · Updated 2 weeks ago
- Make PyTorch models at least run on APUs ☆56 · Updated last year
- NVIDIA Linux open GPU with P2P support ☆94 · Updated last week
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆141 · Updated last year
- 8-bit CUDA functions for PyTorch ☆68 · Updated 2 months ago
- The HIP Environment and ROCm Kit - A lightweight open-source build system for HIP and ROCm ☆613 · Updated this week
- Convert downloaded Ollama models back into their GGUF equivalent format ☆66 · Updated 11 months ago
- Triton for AMD MI25/50/60. Development repository for the Triton language and compiler ☆32 · Updated 2 weeks ago
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480 ☆77 · Updated 6 months ago
- A guide to an Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui ☆55 · Updated 2 years ago
- Stable Diffusion ComfyUI Docker/OCI Image for Intel Arc GPUs ☆46 · Updated last month
- Input your VRAM and RAM and the toolchain will produce a GGUF model tuned to your system within seconds — flexible model sizing and lowes… ☆66 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆338 · Updated this week
- AMD-related optimizations for transformer models ☆96 · Updated last month
- Download models from the Ollama library, without Ollama ☆115 · Updated last year
- My development fork of llama.cpp. For now working on the RK3588 NPU and Tenstorrent backends ☆110 · Updated 3 weeks ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆137 · Updated 4 months ago
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆51 · Updated 2 years ago
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp ☆163 · Updated 7 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆718 · Updated 3 weeks ago
- Example code and documentation on how to get Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all Direc… ☆301 · Updated 2 years ago
- stable-diffusion.cpp bindings for Python ☆80 · Updated 2 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,358 · Updated last week
- Fork of ollama for Vulkan support ☆20 · Updated 7 months ago
- llama.cpp tutorial on Android phone ☆138 · Updated 7 months ago