nikos230 / Run-Pytorch-with-AMD-Radeon-GPU
Complete guide on how to run PyTorch with AMD RX 460/470/480 (gfx803) GPUs
☆45 · Updated 11 months ago
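For context on what the guide ultimately enables, the sketch below is a minimal smoke test (not taken from the repository) for verifying that a ROCm-enabled PyTorch build can see and use a gfx803 card. It assumes PyTorch has already been installed or built with ROCm support as the guide describes, and relies only on standard PyTorch APIs (`torch.version.hip`, `torch.cuda.is_available()`), since ROCm devices are exposed through the `torch.cuda` namespace.

```python
# Minimal smoke test for a ROCm-enabled PyTorch build (illustrative sketch only).
# Assumes PyTorch was installed/built with ROCm support for gfx803 per the guide;
# the ROCm/HIP backend is exposed through the torch.cuda API.
import torch

# torch.version.hip is a version string on ROCm builds and None on CUDA/CPU builds.
print("HIP runtime:", torch.version.hip)
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
    # Run a small matmul on the GPU to confirm kernels actually execute.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK, result sum:", (a @ b).sum().item())
```

If the device name prints and the matmul completes without a HIP error, the gfx803 setup is working end to end.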
Alternatives and similar repositories for Run-Pytorch-with-AMD-Radeon-GPU
Users interested in Run-Pytorch-with-AMD-Radeon-GPU are comparing it to the libraries listed below
- The HIP Environment and ROCm Kit - A lightweight open-source build system for HIP and ROCm ☆641 · Updated this week
- ☆419 · Updated 8 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding and rerank models over OpenAI endpoints. ☆261 · Updated last week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs. ☆713 · Updated this week
- Guides left to us by the legendary SmokelessCPU before he decided to drop off the internet ☆45 · Updated 6 months ago
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆132 · Updated 2 years ago
- Juice Community Version Public Release ☆619 · Updated 7 months ago
- ☆170 · Updated 3 weeks ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆533 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆812 · Updated this week
- A manual to help with using the Tesla P40 GPU ☆139 · Updated last year
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons… ☆42 · Updated last month
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆139 · Updated last week
- Build scripts for ROCm ☆188 · Updated last year
- ☆499 · Updated this week
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
- Fork of ollama for Vulkan support ☆110 · Updated 10 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 3 weeks ago
- ☆56 · Updated 2 years ago
- ☆236 · Updated 2 years ago
- No-code CLI designed for accelerating ONNX workflows ☆221 · Updated 6 months ago
- ☆63 · Updated 7 months ago
- Limbo for Tensor is a QEMU-based hypervisor for Tensor-based Google Pixel devices such as the Pixel 6, 7, and 8 series. ☆225 · Updated 7 months ago
- Correctly set up LXC for Termux ☆72 · Updated last year
- Windows installation guide for Pocophone F1 ☆45 · Updated last month
- Droidian unified CI builds ☆215 · Updated last week
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆383 · Updated 3 weeks ago
- The definitive GPU partitioning tool, taming vendor specificity under a refined interface. ☆10 · Updated 3 years ago
- ☆191 · Updated last year
- Making the Apple Watch functional with Android ☆104 · Updated 11 months ago