Firstbober / rocm-pytorch-gfx803-docker
A Docker image based on rocm/pytorch with support for gfx803 (Polaris 20/21 XT/PRO/XL: RX 580, RX 570, RX 560) and Python 3.8
☆24 · Updated 2 years ago
Alternatives and similar repositories for rocm-pytorch-gfx803-docker
Users interested in rocm-pytorch-gfx803-docker are comparing it to the libraries listed below.
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆64 · Updated last year
- Install guide for ROCm and TensorFlow on Ubuntu for the RX 580 ☆126 · Updated 9 months ago
- ☆233 · Updated 2 years ago
- Copy of rocm/pytorch with gfx803 cards compiled in (see https://github.com/xuhuisheng/rocm-build/blob/develop/docs/gfx803.md) ☆21 · Updated 4 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 4 months ago
- AI inferencing at the edge: a simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆647 · Updated 2 weeks ago
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆356 · Updated 10 months ago
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆136 · Updated last year
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆656 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs ☆555 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆652 · Updated this week
- The HIP Environment and ROCm Kit: a lightweight open-source build system for HIP and ROCm ☆222 · Updated this week
- Hackable and optimized Transformers building blocks, supporting composable construction ☆31 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆68 · Updated 2 weeks ago
- Export and back up Ollama models into GGUF and ModelFile ☆75 · Updated 10 months ago
- My development fork of llama.cpp; currently working on RK3588 NPU and Tenstorrent backends ☆97 · Updated 2 weeks ago
- Download models from the Ollama library, without Ollama ☆89 · Updated 8 months ago
- Ollama model direct-link generator and installer ☆203 · Updated 5 months ago
- Review/check GGUF files and estimate memory usage and maximum tokens per second ☆185 · Updated this week
- LM inference server implementation based on *.cpp ☆233 · Updated this week
- Make PyTorch models at least run on APUs ☆54 · Updated last year
- Example code and documentation for getting Stable Diffusion running with ONNX FP16 models on DirectML. Can run accelerated on all Direc… ☆300 · Updated last year
- Fork of vLLM for AMD MI25/50/60: a high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated 2 months ago
- AMD-related optimizations for transformer models ☆80 · Updated 3 weeks ago
- A Python package that extends official PyTorch to easily achieve performance on Intel platforms ☆47 · Updated 7 months ago
- Prebuilt Windows ROCm libraries for gfx1031 and gfx1032 ☆147 · Updated 3 months ago
- A pure-Rust LLM inference engine (for any LLM-based MLLM such as Spark-TTS), powered by the Candle framework ☆131 · Updated last month
- Simple Go utility to download Hugging Face models and datasets ☆697 · Updated 8 months ago
- llama.cpp tutorial for Android phones ☆112 · Updated 2 months ago
- ☆43 · Updated this week