pomoke / torch-apu-helper
Make PyTorch models at least run on APUs.
☆55, updated last year
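A minimal sketch of how a helper like this can work, assuming it takes PyTorch's pluggable-allocator route on ROCm: a small HIP shim (built separately) hands out the APU's shared system memory, and PyTorch is pointed at it before the first allocation. The library name `libapualloc.so` and the symbols `apu_malloc`/`apu_free` are hypothetical placeholders, not files from the repository.

```python
# Sketch only (not the repo's exact code): register a custom allocator through
# PyTorch's pluggable-allocator API so a ROCm build can back tensors with an
# APU's shared system memory. "libapualloc.so", "apu_malloc" and "apu_free"
# are hypothetical names for a small HIP shim built separately.
import torch

alloc = torch.cuda.memory.CUDAPluggableAllocator(
    "./libapualloc.so",  # shared library wrapping e.g. hipMallocManaged/hipFree
    "apu_malloc",        # exported C symbol used for allocation
    "apu_free",          # exported C symbol used for deallocation
)
# Must run before the first device allocation.
torch.cuda.memory.change_current_allocator(alloc)

# ROCm devices appear under the "cuda" device type in PyTorch.
x = torch.randn(1024, 1024, device="cuda")
print(x @ x.T)
```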
Alternatives and similar repositories for torch-apu-helper
Users interested in torch-apu-helper are comparing it to the libraries listed below.
- build scripts for ROCm (☆186, updated last year)
- Deep Learning Primitives and Mini-Framework for OpenCL (☆199, updated 11 months ago)
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm (☆269, updated this week)
- Because RKNPU only knows 4D (☆37, updated last year)
- DLPrimitives/OpenCL out-of-tree backend for PyTorch (☆362, updated 11 months ago)
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 (☆209, updated 5 months ago)
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… (☆73, updated last week)
- A set of utilities for monitoring and customizing GPU performance (☆155, updated last year)
- Fork of Ollama for Vulkan support (☆90, updated 5 months ago)
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. (☆65, updated this week)
- Lightweight inference server for OpenVINO (☆191, updated 2 weeks ago)
- General Site for the GFX803 ROCm Stuff (☆94, updated last week)
- Efficient Inference of Transformer models (☆442, updated last year)
- Intel® NPU (Neural Processing Unit) Driver (☆294, updated last week)
- ROCm Docker images with fixes/support for the legacy gfx803 architecture, e.g. Radeon RX 590/RX 580/RX 570/RX 480 (☆71, updated 2 months ago)
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … (☆51, updated 5 months ago)
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. (☆72, updated 6 months ago)
- Customized ACPI method for overriding mobile AMD APU STAPM values (☆38, updated 6 years ago)
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… (☆381, updated this week)
- Download models from the Ollama library, without Ollama (☆90, updated 8 months ago); see the sketch after this list
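For the "Download models from the Ollama library, without Ollama" entry above, a hedged sketch of the usual approach: such tools assume the Ollama library is served from an OCI-style registry at registry.ollama.ai, fetch the model's manifest, then pull each layer blob by digest. The endpoint layout, the Accept header, and the llama3.2 model name are assumptions for illustration, not documented API guarantees.

```python
# Hedged sketch: fetch an Ollama library model straight from its registry.
# Assumes an OCI-style layout: manifests at /v2/library/<model>/manifests/<tag>
# and layer blobs at /v2/library/<model>/blobs/<digest>.
import json
import urllib.request

REGISTRY = "https://registry.ollama.ai/v2/library"

def download_model(model: str, tag: str = "latest") -> None:
    # Fetch the manifest listing the model's layers (weights, template, params).
    req = urllib.request.Request(
        f"{REGISTRY}/{model}/manifests/{tag}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    with urllib.request.urlopen(req) as resp:
        manifest = json.load(resp)

    # Download each layer blob by its content digest.
    for layer in manifest.get("layers", []):
        digest = layer["digest"]              # e.g. "sha256:abcd..."
        filename = digest.replace(":", "-")
        print(f"downloading {layer.get('mediaType', 'blob')} -> {filename}")
        urllib.request.urlretrieve(f"{REGISTRY}/{model}/blobs/{digest}", filename)

if __name__ == "__main__":
    download_model("llama3.2", "latest")  # example model name, for illustration
```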