mattcurf / ollama-intel-gpu
☆257 · Updated 7 months ago
Alternatives and similar repositories for ollama-intel-gpu
Users who are interested in ollama-intel-gpu are comparing it to the repositories listed below
- Make use of Intel Arc Series GPU to Run Ollama, StableDiffusion, Whisper and Open WebUI, for image generation, speech recognition and int… ☆224 · Updated last month
- AMD APU compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆143 · Updated this week
- This is a step-by-step guide to enable Gen 12/13 Intel vGPU using SR-IOV Technology so up to 7 Client VMs can enjoy hardware GPU decoding ☆315 · Updated last year
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆274 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆637 · Updated this week
- Interactive, locally hosted tool to migrate Open-WebUI SQLite databases to PostgreSQL ☆190 · Updated 3 months ago
- ☆770 · Updated this week
- ☆186 · Updated 2 months ago
- API up your Ollama Server. ☆191 · Updated 2 weeks ago
- A comprehensive and versatile Bash script designed to simplify and optimize the configuration and management of Proxmox Virtual Enviro… ☆796 · Updated 2 weeks ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆109 · Updated 2 months ago
- ☆246 · Updated last year
- A script that automatically activates ASPM for all supported devices on Linux. Moved to https://git.notthebe.ee/notthebee/AutoASPM ☆421 · Updated 3 weeks ago
- ☆208 · Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- This is a one-click install script to enable Gen 12/13 Intel vGPU using SR-IOV Technology so up to 7 Client VMs can enjoy hardware GPU de… ☆80 · Updated last year
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆164 · Updated this week
- Build your own distroless images with this mini file system and some binaries ☆60 · Updated 2 weeks ago
- OpenAI-Compatible Proxy Middleware for the Wyoming Protocol ☆131 · Updated last week
- Open WebUI Client for Android is a mobile app for using Open WebUI interfaces with local or remote AI models. ☆136 · Updated 5 months ago
- Get the Ryzen processors with AMD Radeon 680M/780M integrated graphics or RDNA2/RDNA3 GPUs running with Proxmox, GPU passthrough and UEFI… ☆1,068 · Updated last week
- Samba SMB server in a Docker container. ☆595 · Updated last week
- Ollama with Intel (i)GPU acceleration in Docker, with benchmarks ☆31 · Updated last week
- Persistent Linux 'jails' on TrueNAS SCALE to install software (k3s, docker, portainer, podman, etc.) with full access to all files via bi… ☆585 · Updated last year
- ☆51 · Updated 2 years ago
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆323 · Updated this week
- Native mobile client for OpenWebUI. Chat with your self‑hosted AI. ☆921 · Updated last week
- Script for building Proxmox Backup Server 3.x (Bookworm) or 4.x (Trixie) for Armbian64 ☆332 · Updated 2 weeks ago
- Caddy Docker image with Cloudflare DNS module ☆223 · Updated last month
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆127 · Updated this week