SearchSavior / OpenArc
Lightweight Inference server for OpenVINO
☆210 · Updated last week
Alternatives and similar repositories for OpenArc
Users interested in OpenArc are comparing it to the libraries listed below.
- A platform to self-host AI on easy mode ☆163 · Updated this week
- InferX is an Inference Function-as-a-Service platform ☆132 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆179 · Updated last week
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated this week
- ☆83 · Updated last week
- Sparse inferencing for transformer-based LLMs ☆197 · Updated last month
- A web application that converts speech to speech, 100% private ☆75 · Updated 3 months ago
- Open-source LLM UI, compatible with all local LLM providers. ☆174 · Updated 11 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆272 · Updated 3 weeks ago
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio and others. Designed for speed, simplicity and… ☆86 · Updated last week
- ☆176 · Updated last week
- ☆83 · Updated 6 months ago
- Privacy-first agentic framework with powerful reasoning & task automation capabilities. Natively distributed and fully ISO 27XXX complian… ☆66 · Updated 5 months ago
- Enhancing LLMs with LoRA ☆135 · Updated last week
- ☆165 · Updated last month
- A cross-platform desktop application that lets you chat with locally hosted LLMs and enjoy features like MCP support ☆224 · Updated last month
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆252 · Updated 7 months ago
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- ☆209 · Updated last week
- Docs for GGUF quantization (unofficial) ☆258 · Updated last month
- The Fastest Way to Fine-Tune LLMs Locally ☆317 · Updated 5 months ago
- ☆223 · Updated 4 months ago
- Easy-to-use interface for the Whisper model, optimized for all GPUs! ☆294 · Updated last month
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- An Open WebUI function for a better R1 experience ☆79 · Updated 6 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆110 · Updated 2 months ago
- Eternal is an experimental platform for machine learning models and workflows. ☆68 · Updated 6 months ago
- A local AI companion that uses a collection of free, open-source AI models to create two virtual companions that will follow you… ☆232 · Updated last month
- Code for Papeg.ai ☆225 · Updated 8 months ago
- Llama.cpp runner/swapper and proxy that emulates LMStudio/Ollama backends ☆44 · Updated 3 weeks ago