segurac / force-host-alloction-APU
☆63 · Updated 6 months ago

Alternatives and similar repositories for force-host-alloction-APU

Users interested in force-host-alloction-APU are comparing it to the libraries listed below.
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- Fork of ollama for vulkan support. ☆110 · Updated 9 months ago
- AMD APU compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆133 · Updated last week
- ComfyUI with AMD ROCm support for GPU-accelerated AI image generation on AMD RX 6000/7000+ GPUs. ☆29 · Updated 4 months ago
- ☆496 · Updated this week
- ☆235 · Updated 2 years ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆260 · Updated last week
- ☆418 · Updated 8 months ago
- General Site for the GFX803 ROCm Stuff. ☆127 · Updated 3 months ago
- ☆56 · Updated 2 years ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1. ☆216 · Updated last week
- ☆83 · Updated 4 years ago
- Build scripts for ROCm. ☆188 · Updated last year
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480. ☆77 · Updated 6 months ago
- llama-swap + a minimal ollama-compatible API. ☆37 · Updated this week
- ☆18 · Updated 11 months ago
- ☆176 · Updated last month
- Hacking in a V4L2 M2M decoder for AMLogic SoCs. ☆18 · Updated 6 years ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆488 · Updated last week
- See how to play with ROCm and run it with AMD GPUs! ☆38 · Updated 6 months ago
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… ☆152 · Updated 4 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆84 · Updated last week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration. ☆129 · Updated this week
- Remote development for OSS builds of VS Code, such as VSCodium. ☆121 · Updated 8 months ago
- A set of utilities for monitoring and customizing GPU performance. ☆153 · Updated last year
- Kernel/QEMU patches for Venus. ☆35 · Updated last year
- Run stable-diffusion-webui with a Radeon RX 580 8GB on Ubuntu 22.04.2 LTS. ☆68 · Updated 2 years ago
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs. ☆65 · Updated 7 months ago
- archiso installer customized for the X13s laptop and Windows Dev Kit 2023. ☆66 · Updated last month
- Fork of ollama for vulkan support. ☆20 · Updated 7 months ago