zhaohb / ollama_ov
Adds a GenAI backend for Ollama to run generative AI models using OpenVINO Runtime.
☆10 · Updated 3 weeks ago
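From a client's point of view, Ollama is backend-agnostic: requests look the same whether the server runs inference through its default llama.cpp path or through the OpenVINO GenAI backend this repository adds. A minimal sketch using the official `ollama` Python client; the model name is illustrative, and how ollama_ov is built and configured to select the OpenVINO backend is documented in the repository itself, not shown here.

```python
import ollama  # official Ollama Python client: pip install ollama

# Sends a chat request to the local Ollama server (default
# http://localhost:11434). Which runtime executes the model
# (llama.cpp, or OpenVINO GenAI via ollama_ov) is decided on the
# server side; the client call is identical either way.
response = ollama.chat(
    model="llama3.2",  # illustrative model name
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```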
Alternatives and similar repositories for ollama_ov
Users interested in ollama_ov are comparing it to the libraries listed below.
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆555 · Updated last week
- ☆251 · Updated last month
- AMD APU compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language models. ☆63 · Updated this week
- Lightweight Inference server for OpenVINO ☆188 · Updated this week
- No-code CLI designed for accelerating ONNX workflows ☆201 · Updated last month
- ☆356 · Updated 3 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆652 · Updated this week
- Make use of Intel Arc Series GPUs to run Ollama, StableDiffusion, Whisper and Open WebUI, for image generation, speech recognition and int… ☆77 · Updated last month
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 4 months ago
- General Site for the GFX803 ROCm Stuff ☆89 · Updated 3 weeks ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆111 · Updated last week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆568 · Updated last week
- Intel® NPU (Neural Processing Unit) Driver ☆281 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 10 months ago
- Nextcloud AppAPI Docker Socket Proxy ☆16 · Updated 3 months ago
- ☆72 · Updated 2 months ago
- Intel® NPU Acceleration Library ☆679 · Updated 2 months ago
- Model swapping for llama.cpp (or any local OpenAI compatible server) ☆1,035 · Updated 2 weeks ago
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆100 · Updated last month
- A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms ☆1,907 · Updated this week
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime (see the Python sketch after this list) ☆303 · Updated this week
- ☆430 · Updated this week
- Local LLM Server with GPU and NPU Acceleration ☆206 · Updated this week
- OpenVINO Tokenizers extension ☆37 · Updated this week
- ☆52 · Updated last year
- ✨ Nextcloud Assistant ☆52 · Updated this week
- Tools for easier OpenVINO development/debugging ☆9 · Updated 4 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆89 · Updated last month
- btrfs snapraid auto sync ☆15 · Updated last year
- llama.cpp + ROCm + llama-swap ☆21 · Updated 5 months ago
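For the openvino.genai entry above (the one with the forward reference), the Python side of that API is compact enough to show. A minimal sketch, assuming a model already exported to OpenVINO IR format (for example with `optimum-cli export openvino`); the model directory and prompt are illustrative.

```python
import openvino_genai as ov_genai  # pip install openvino-genai

# Load an LLM that has already been exported to OpenVINO IR format.
# The directory name is illustrative (an assumption, not taken from
# the list above); "CPU" can be swapped for "GPU" or "NPU".
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0-ov", "CPU")

# One-call generation; tokenization and detokenization happen inside
# the pipeline.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```

This LLMPipeline is presumably the kind of API a higher-level integration such as ollama_ov builds on, which is why the two projects end up in the same comparison list.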