eleiton / ollama-intel-arc
Make use of an Intel Arc series GPU to run Ollama, Stable Diffusion and Open WebUI, for image generation and interaction with Large Language Models (LLMs).
☆32 · Updated 2 weeks ago
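Since the project ultimately exposes a standard Ollama endpoint, talking to it works like talking to any Ollama instance. A minimal sketch, assuming Ollama is reachable on its default port 11434 and a model such as `llama3` has already been pulled (both the port mapping and the model name are assumptions, not details from this repo):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port; adjust to your container's mapping

def generate(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming generation request to a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is an Intel Arc GPU good for?"))
```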
Alternatives and similar repositories for ollama-intel-arc
Users interested in ollama-intel-arc are comparing it to the repositories listed below.
- ☆247 · Updated last month
- Lightweight Inference server for OpenVINO ☆180 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆86 · Updated last month
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆26 · Updated 8 months ago
- Ollama with Intel (i)GPU acceleration in Docker, with benchmarks ☆14 · Updated this week
- AMD APU compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language mod… ☆42 · Updated this week
- Faster Whisper running on AMD GPUs with modified CTranslate2 libraries, served up with the Wyoming protocol ☆22 · Updated 9 months ago
- Adds a GenAI backend to Ollama for running generative AI models with the OpenVINO Runtime. ☆9 · Updated last week
- Stable Diffusion v1.7.0, v1.9.3 & v1.10.1 on RDNA2/RDNA3 AMD ROCm with Docker Compose ☆12 · Updated 7 months ago
- GPU Power and Performance Manager ☆59 · Updated 7 months ago
- General site for the GFX803 ROCm stuff ☆71 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆204 · Updated 3 months ago
- ☆52 · Updated last year
- Belullama is a comprehensive AI application that bundles Ollama, Open WebUI, and Automatic1111 (Stable Diffusion WebUI) into a single, ea… ☆167 · Updated last week
- ZFS management application for Linux, providing both a desktop GUI and a web UI. Simplifies common ZFS administration… ☆23 · Updated last month
- Prometheus exporter for a Linux-based GDDR6/GDDR6X VRAM and GPU core hotspot temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆19 · Updated 8 months ago
- A one-click install script to enable Gen 12/13 Intel vGPU using SR-IOV technology, so up to 7 client VMs can enjoy hardware GPU de… ☆68 · Updated 6 months ago
- ☆326 · Updated 2 months ago
- ☆56 · Updated 2 years ago
- Make PyTorch models at least run on APUs. ☆55 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆385 · Updated this week
- A fork of vLLM enabling Pascal architecture GPUs ☆28 · Updated 3 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆624 · Updated last week
- LocalAI integration component for Home Assistant ☆42 · Updated last year
- LLM Benchmark for Throughput via Ollama (Local LLMs); a minimal measurement sketch follows after this list ☆231 · Updated this week
- Fork of ollama for Vulkan support ☆81 · Updated 3 months ago
- The ultimate developer workstation, based on Bluefin with KDE, now incorporated into Bluefin. ☆75 · Updated 9 months ago
- ☆198 · Updated last month
- llama.cpp + ROCm + llama-swap ☆19 · Updated 4 months ago
- ☆75 · Updated this week
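On the throughput-benchmarking entry above: Ollama's non-streaming generate response includes its own timing fields, so tokens-per-second can be computed without external timers. A minimal sketch, again assuming a local server on the default port and an already-pulled model (`llama3` here is an assumed placeholder, and this is not the listed benchmark project's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default endpoint; adjust as needed

def tokens_per_second(model: str, prompt: str) -> float:
    """Compute generation throughput from the timing fields Ollama returns."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        stats = json.load(resp)
    # eval_count: tokens generated; eval_duration: generation time in nanoseconds
    return stats["eval_count"] / (stats["eval_duration"] / 1e9)

if __name__ == "__main__":
    print(f"{tokens_per_second('llama3', 'Write a haiku about GPUs.'):.1f} tokens/s")
```

Averaging several runs after a warm-up request gives steadier numbers, since the first call typically includes model load time.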