eleiton / ollama-intel-arc
Make use of an Intel Arc series GPU to run Ollama, Stable Diffusion, Whisper and Open WebUI for image generation, speech recognition and interaction with Large Language Models (LLMs).
☆77Updated last month
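For orientation, once a stack like this is running, the LLM side is usually driven through Ollama's standard REST API. The sketch below is illustrative only and is not taken from this repository: it assumes an Ollama container reachable on the default port 11434 and a model that has already been pulled (the model name is a placeholder).

```python
# Minimal sketch: query a running Ollama server over its default REST API.
# Assumes Ollama listens on localhost:11434 and the model below is already pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint
payload = {
    "model": "llama3.2",  # placeholder; substitute whichever model you pulled
    "prompt": "Summarize what an Intel Arc GPU is in one sentence.",
    "stream": False,      # request a single JSON response instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # generated text returned by the model
```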
Alternatives and similar repositories for ollama-intel-arc
Users that are interested in ollama-intel-arc are comparing it to the libraries listed below
- ☆251Updated last month
- AMD APU compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod…☆63Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1☆209Updated 4 months ago
- Stable Diffusion v1.7.0 & v1.9.3 & v1.10.1 on RDNA2 RDNA3 AMD ROCm with Docker-compose☆13Updated 8 months ago
- ☆222Updated last year
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480☆70Updated last month
- General Site for the GFX803 ROCm Stuff☆89Updated 3 weeks ago
- AI chatbot for Matrix with infinite personalities, using ollama☆48Updated last week
- A script that automatically activates ASPM for all supported devices on Linux☆290Updated 7 months ago
- Lightweight Inference server for OpenVINO☆188Updated this week
- Open WebUI Client for Android is a mobile app for using Open WebUI interfaces with local or remote AI models.☆73Updated 2 months ago
- ☆56Updated 2 years ago
- This is a step-by-step guide to enable Gen 12/13 Intel vGPU using SR-IOV Technology so up to 7 Client VMs can enjoy hardware GPU decoding☆247Updated 7 months ago
- This is a one-click install script to enable Gen 12/13 Intel vGPU using SR-IOV Technology so up to 7 Client VMs can enjoy hardware GPU de…☆70Updated 7 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading☆647Updated 2 weeks ago
- A manual to help with using the Tesla P40 GPU☆126Updated 8 months ago
- Wake-on-LAN for libvirt based VMs☆51Updated last year
- Faster Whisper running on AMD GPUs with modified CTranslate2 libraries, served up with the Wyoming protocol☆23Updated 11 months ago
- Ollama with Intel (i)GPU acceleration in Docker, plus benchmarks☆15Updated last week
- ☆356Updated 3 months ago
- Make PyTorch models at least run on APUs.☆54Updated last year
- A script to monitor Intel ARC GPUs on Linux☆18Updated 2 years ago
- Prometheus exporter for a Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature reader for NVIDIA RTX 3000/4000 series GPUs.☆21Updated 9 months ago
- ☆52Updated last year
- A collection of Dockerized games and apps like Steam, Firefox and Retroarch☆556Updated this week
- llama.cpp + ROCm + llama-swap☆21Updated 5 months ago
- Docker Compose and other useful files for self-hosting a Firefox Sync server☆138Updated last year
- Adds a GenAI backend for Ollama to run generative AI models using the OpenVINO Runtime.☆10Updated 3 weeks ago
- Install Immich in LXC with optional CUDA support☆186Updated 2 weeks ago
- Persistent Linux 'jails' on TrueNAS SCALE to install software (k3s, docker, portainer, podman, etc.) with full access to all files via bi…☆582Updated 8 months ago