Firstbober / rocm-pytorch-gfx803-docker
A Docker image based on rocm/pytorch with support for gfx803 (Polaris 20/21 XT/PRO/XL: RX 580, RX 570, RX 560) and Python 3.8
☆24 · Updated 2 years ago
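Like other ROCm containers, an image such as this is typically started by passing the KFD and DRI devices through to Docker. The image name and tag below are illustrative assumptions (check the repository's README for the published tag); the sketch only prints the command rather than executing it, so it runs even without Docker installed:

```shell
# Sketch of a typical ROCm container invocation for a gfx803 card
# (RX 580/570/560). The image name/tag is an assumption -- consult the
# repository's README for the real one. The command is printed, not run.
CMD="docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  -v \$HOME/workspace:/workspace \
  firstbober/rocm-pytorch-gfx803:latest"
echo "$CMD"
```

`--device=/dev/kfd --device=/dev/dri` exposes the ROCm compute and display interfaces to the container, and `--group-add video` grants the container user access to them; `--security-opt seccomp=unconfined` is commonly needed for ROCm's memory-mapping calls.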
Alternatives and similar repositories for rocm-pytorch-gfx803-docker
Users interested in rocm-pytorch-gfx803-docker are comparing it to the repositories listed below.
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆67 · Updated 2 years ago
- Install guide of ROCm and Tensorflow on Ubuntu for the RX580 ☆129 · Updated last year
- ☆236 · Updated 2 years ago
- Copy of rocm/pytorch with gfx803 cards compiled in (see https://github.com/xuhuisheng/rocm-build/blob/develop/docs/gfx803.md) ☆20 · Updated 3 months ago
- build scripts for ROCm ☆188 · Updated last year
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480 ☆80 · Updated 7 months ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 3 weeks ago
- DLPrimitives/OpenCL out of tree backend for pytorch ☆382 · Updated last month
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆141 · Updated last year
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆661 · Updated this week
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆721 · Updated last month
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆717 · Updated last week
- Download models from the Ollama library, without Ollama ☆117 · Updated last year
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆758 · Updated this week
- Deep Learning Primitives and Mini-Framework for OpenCL ☆205 · Updated last year
- Ollama model direct link generator and installer. ☆226 · Updated 10 months ago
- 8-bit CUDA functions for PyTorch ☆69 · Updated 3 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,399 · Updated this week
- NVIDIA Linux open GPU with P2P support ☆98 · Updated 3 weeks ago
- Ollama chat client in Vue, everything you need to do your private text rpg in browser ☆135 · Updated last year
- Export and Backup Ollama models into GGUF and ModelFile ☆89 · Updated last year
- ROCm Library Files for gfx1103 and update with others arches based on AMD GPUs for use in Windows. ☆713 · Updated 3 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆118 · Updated 2 weeks ago
- High-speed and easy-use LLM serving framework for local deployment ☆139 · Updated 4 months ago
- ☆48 · Updated 2 years ago
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆221 · Updated 4 months ago
- Input your VRAM and RAM and the toolchain will produce a GGUF model tuned to your system within seconds — flexible model sizing and lowes… ☆69 · Updated last week
- AMD related optimizations for transformer models ☆96 · Updated 2 months ago
- FORK of VLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆65 · Updated 7 months ago