segurac / force-host-alloction-APU
☆59 · Updated 2 months ago
Alternatives and similar repositories for force-host-alloction-APU
Users who are interested in force-host-alloction-APU are comparing it to the repositories listed below
- Make PyTorch models at least run on APUs. ☆54 · Updated last year
- ☆430 · Updated this week
- ☆356 · Updated 3 months ago
- General Site for the GFX803 ROCm Stuff ☆89 · Updated 3 weeks ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 4 months ago
- Fork of ollama for Vulkan support ☆87 · Updated 5 months ago
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆64 · Updated last year
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆136 · Updated last year
- llama.cpp + ROCm + llama-swap ☆21 · Updated 5 months ago
- A set of utilities for monitoring and customizing GPU performance ☆153 · Updated last year
- Remote development for OSS Builds of VSCode like VSCodium ☆116 · Updated 3 months ago
- ☆32 · Updated 7 months ago
- ☆233 · Updated 2 years ago
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480 ☆70 · Updated last month
- A user-configurable utility for GPU vendor drivers enabling the registration of arbitrary mdev types with the VFIO-Mediated Device framew… ☆58 · Updated 2 years ago
- Local LLM Server with GPU and NPU Acceleration ☆206 · Updated this week
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… ☆114 · Updated this week
- Lightweight Inference server for OpenVINO ☆188 · Updated this week
- NVIDIA driver installation on Clear Linux ☆22 · Updated 7 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆89 · Updated last month
- ☆40 · Updated last month
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆68 · Updated this week
- Build scripts for ROCm ☆186 · Updated last year
- Python script to control the fan speed of Nvidia GPUs under Linux. Keep it simple stupid. ☆51 · Updated 8 months ago
- No-code CLI designed for accelerating ONNX workflows ☆201 · Updated last month
- My development fork of llama.cpp. For now working on RK3588 NPU and Tenstorrent backend ☆97 · Updated 2 weeks ago
- ☆54 · Updated last year
- ☆59 · Updated last year
- AMD APU compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod… ☆63 · Updated this week
- Stable Diffusion GUI written in C++ ☆60 · Updated 3 months ago