mattcurf / ollama-intel-gpu
☆251 · Updated 3 months ago
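For context, setups like this typically run Ollama inside a container and pass the Intel GPU's render node through to it. A minimal docker-compose sketch under that assumption (the image name, volume, and port mapping are illustrative placeholders, not necessarily this repository's actual configuration):

```yaml
# Hypothetical compose file: runs an Ollama build with Intel GPU support
# and exposes the host's /dev/dri device nodes for hardware acceleration.
services:
  ollama:
    image: ollama-intel-gpu:latest   # placeholder; e.g. an image built from the repo's Dockerfile
    devices:
      - /dev/dri:/dev/dri            # Intel GPU render/card nodes (render node needed for compute)
    volumes:
      - ollama-data:/root/.ollama    # persist pulled models across container restarts
    ports:
      - "11434:11434"                # Ollama's default API port
volumes:
  ollama-data:
```

The same /dev/dri passthrough pattern applies to most of the Intel GPU entries listed below.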
Alternatives and similar repositories for ollama-intel-gpu
Users that are interested in ollama-intel-gpu are comparing it to the libraries listed below
- Make use of an Intel Arc Series GPU to run Ollama, Stable Diffusion, Whisper and Open WebUI for image generation, speech recognition and int… ☆136 · Updated 3 months ago
- AMD APU-compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod… ☆96 · Updated this week
- A step-by-step guide to enabling Gen 12/13 Intel vGPU using SR-IOV technology so that up to 7 client VMs can use hardware GPU decoding ☆271 · Updated 10 months ago
- ☆235 · Updated last year
- Open WebUI Client for Android is a mobile app for using Open WebUI interfaces with local or remote AI models. ☆105 · Updated last month
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆210 · Updated 6 months ago
- Lightweight inference server for OpenVINO ☆211 · Updated this week
- Native mobile client for Open-WebUI. Chat with your self-hosted AI. ☆387 · Updated last week
- A script that automatically activates ASPM for all supported devices on Linux ☆344 · Updated 9 months ago
- ZFS management application (GUI/web UI) for Linux. Provides both a desktop GUI and a web UI. Simplifies common ZFS administration… ☆60 · Updated 2 months ago
- A simple GUI for configuring Traefik routes ☆118 · Updated 3 weeks ago
- API up your Ollama server. ☆177 · Updated 3 months ago
- ☆53 · Updated last year
- A one-click install script to enable Gen 12/13 Intel vGPU using SR-IOV technology so that up to 7 client VMs can use hardware GPU de… ☆73 · Updated 10 months ago
- Automatically scale LXC container resources on Proxmox hosts ☆208 · Updated 6 months ago
- Caddy Docker image with Cloudflare DNS module ☆174 · Updated 3 weeks ago
- Ollama with Intel (i)GPU acceleration in Docker, with benchmarks ☆22 · Updated last week
- Automatically optimize files uploaded to Immich to save storage space ☆167 · Updated 2 months ago
- ☆74 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs. Like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆194 · Updated this week
- Gitea Mirror auto-syncs GitHub repos to your self-hosted Gitea, with a sleek Web UI and easy Docker deployment. ☆313 · Updated this week
- A comprehensive and versatile Bash script designed to simplify and optimize the configuration and management of Proxmox Virtual Enviro… ☆579 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads (vLLM, SGLang, RAG, and agents) in minutes, fully a… ☆277 · Updated 2 weeks ago
- A drop-in replacement for portainer/portainer-ce, without annoying UI elements or tracking scripts ☆169 · Updated last year
- LLM benchmark for throughput via Ollama (local LLMs) ☆291 · Updated last month
- ☆183 · Updated this week
- ☆226 · Updated last week
- Model swapping for llama.cpp (or any local OpenAI API-compatible server) ☆1,499 · Updated last week
- A tunneling client for Pangolin ☆486 · Updated last week
- Web interface for Network UPS Tools ☆208 · Updated last week