jrcichra / rocm-pytorch-gfx803
Copy of rocm/pytorch with gfx803 cards compiled in (see https://github.com/xuhuisheng/rocm-build/blob/develop/docs/gfx803.md)
☆20 · Updated last month
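A minimal sketch of how an image like this is typically run. The image name `jrcichra/rocm-pytorch-gfx803` is assumed from the repository title (not verified on a registry here); the device and group flags are the standard ROCm container passthrough options and may need adjusting for your system:

```shell
# Assumption: the image is published as jrcichra/rocm-pytorch-gfx803.
docker pull jrcichra/rocm-pytorch-gfx803

# Standard ROCm container flags: pass through the kernel compute driver
# (/dev/kfd) and the GPU render nodes (/dev/dri), and join the video group
# so the container user can access them.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  jrcichra/rocm-pytorch-gfx803 \
  python3 -c "import torch; print(torch.cuda.is_available())"
```

In ROCm builds of PyTorch, `torch.cuda` is backed by HIP, so `torch.cuda.is_available()` reports whether the gfx803 GPU is visible inside the container.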
Alternatives and similar repositories for rocm-pytorch-gfx803
Users interested in rocm-pytorch-gfx803 are comparing it to the libraries listed below.
- A Docker image based on rocm/pytorch with support for gfx803 (Polaris 20-21 (XT/PRO/XL); RX580; RX570; RX560) and Python 3.8 ☆24 · Updated 2 years ago
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆67 · Updated last year
- An install guide for the RX580 ☆38 · Updated 4 years ago
- ☆234 · Updated 2 years ago
- Install guide for ROCm and TensorFlow on Ubuntu for the RX580 ☆127 · Updated last year
- Build scripts for ROCm ☆186 · Updated last year
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆51 · Updated 2 years ago
- Deep Learning Primitives and Mini-Framework for OpenCL ☆203 · Updated last year
- ROCm Docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480 ☆76 · Updated 5 months ago
- ☆47 · Updated 2 years ago
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆373 · Updated last year
- Fork of ollama with Vulkan support ☆105 · Updated 8 months ago
- 8-bit CUDA functions for PyTorch ☆66 · Updated last month
- ☆37 · Updated 2 years ago
- A set of utilities for monitoring and customizing GPU performance ☆153 · Updated last year
- A fork that installs and runs on CPU-only PyTorch ☆213 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Y'all thought the dead internet theory wasn't real, but HERE IT IS ☆16 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆105 · Updated 2 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated last week
- Fast inference of instruct-tuned LLaMA on your personal devices ☆23 · Updated 2 years ago
- A collection of Arch Linux PKGBUILDs for the ROCm platform ☆388 · Updated 7 months ago
- ☆64 · Updated last year
- ☆84 · Updated 3 weeks ago
- Docker configuration for koboldcpp ☆36 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆81 · Updated last week
- General Site for the GFX803 ROCm Stuff ☆120 · Updated 2 months ago
- No-messing-around sh client for llama.cpp's server ☆30 · Updated last year
- LLM inference in C/C++ ☆23 · Updated last year