mattcurf / ollama-intel-gpu
☆252 · Updated 2 months ago
Alternatives and similar repositories for ollama-intel-gpu
Users interested in ollama-intel-gpu are comparing it to the repositories listed below.
- Make use of Intel Arc Series GPU to Run Ollama, StableDiffusion, Whisper and Open WebUI, for image generation, speech recognition and int… ☆87 · Updated last month
- ☆227 · Updated last year
- Lightweight inference server for OpenVINO ☆191 · Updated 2 weeks ago
- Ollama with Intel (i)GPU acceleration in Docker, plus benchmarks ☆18 · Updated last week
- A step-by-step guide to enabling Gen 12/13 Intel vGPU using SR-IOV technology so up to 7 client VMs can use hardware GPU decoding ☆258 · Updated 8 months ago
- Automatically scale virtual machine resources on Proxmox hosts ☆257 · Updated 3 weeks ago
- Automatically scale LXC container resources on Proxmox hosts ☆208 · Updated 4 months ago
- A comprehensive and versatile Bash script designed to simplify and optimize the configuration and management of Proxmox Virtual Enviro… ☆415 · Updated last month
- ☆52 · Updated last year
- AMD APU compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod… ☆68 · Updated this week
- Get Ryzen processors with AMD Radeon 680M/780M integrated graphics or RDNA2/RDNA3 GPUs running with Proxmox, GPU passthrough and UEFI… ☆789 · Updated 2 weeks ago
- A simple GUI for configuring Traefik routes ☆112 · Updated 2 months ago
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆1,138 · Updated this week
- Vanilla Arch modified into SteamOS with web-based desktop access, useful for remote play and lower-end games ☆172 · Updated 6 months ago
- CCPVE - Scripts for management and task automation in Proxmox VE ☆469 · Updated 3 weeks ago
- A Windows desktop client for Proxmox ☆141 · Updated 7 months ago
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆234 · Updated 2 weeks ago
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆381 · Updated this week
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆269 · Updated last month
- A script that automatically activates ASPM for all supported devices on Linux ☆314 · Updated 7 months ago
- ☆176 · Updated this week
- Incus Helper-Scripts ☆141 · Updated 2 months ago
- Caddy Docker image with Cloudflare DNS module ☆157 · Updated 2 months ago
- Search the web and your self-hosted apps using local AI agents ☆458 · Updated 8 months ago
- LXD Graphical Web Console ☆344 · Updated 2 months ago
- A one-click install script to enable Gen 12/13 Intel vGPU using SR-IOV technology so up to 7 client VMs can enjoy hardware GPU de… ☆71 · Updated 8 months ago
- API up your Ollama Server ☆166 · Updated last month
- An OpenAI API-compatible text-to-speech server using Coqui AI's xtts_v2 and/or Piper TTS as the backend ☆797 · Updated 6 months ago
- Kasm Workspaces platform provides enterprise-class orchestration, data loss prevention, and web streaming technology to enable the delive… ☆406 · Updated last week
- A proxy server for multiple Ollama instances with key security ☆470 · Updated last week