RoyalCities / RC-Home-Assistant-Low-VRAM
Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversations, and Ollama model recommendations; settings are designed for low-VRAM systems.
☆225 · Updated 5 months ago
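As a rough illustration of the low-VRAM approach this stack takes (a sketch under stated assumptions, not code from the repo), the snippet below queries a local Ollama server with a small model and a reduced context window. The model name, option values, and default endpoint are assumptions.

```python
# Illustrative sketch only (not from RC-Home-Assistant-Low-VRAM): query a
# local Ollama server with low-VRAM-friendly options. The model name and
# option values are assumptions; pick a small quantized model that fits
# your GPU.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3.2:3b",        # assumed small model; see the repo for its picks
        "prompt": "Turn off the living room lights.",
        "stream": False,
        "options": {"num_ctx": 2048},  # smaller context window uses less VRAM
        "keep_alive": "5m",            # unload the model after 5 idle minutes
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Shrinking `num_ctx` and letting the model unload via `keep_alive` are the kinds of settings that keep VRAM usage manageable on modest GPUs.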
Alternatives and similar repositories for RC-Home-Assistant-Low-VRAM
Users interested in RC-Home-Assistant-Low-VRAM are comparing it to the libraries listed below.
- ☆196 · Updated 9 months ago
- ☆204 · Updated 4 months ago
- ☆178 · Updated 5 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆279 · Updated last week
- 🗣️ Real‑time, low‑latency voice, vision, and conversational‑memory AI assistant built on LiveKit and local LLMs ✨ ☆100 · Updated 6 months ago
- Agent MCP for ffmpeg ☆212 · Updated 7 months ago
- OLLama IMage CAtegorizer ☆70 · Updated last year
- A cross-platform desktop application for chatting with locally hosted LLMs, with features like MCP support ☆225 · Updated 5 months ago
- Notate is a desktop chat application that takes AI conversations to the next level. It combines the simplicity of chat with advanced feat… ☆263 · Updated 10 months ago
- A web application that converts speech to speech, 100% private ☆81 · Updated 7 months ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆126 · Updated 3 weeks ago
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆259 · Updated 2 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆253 · Updated 7 months ago
- Give your local LLM a real memory with a lightweight, fully local memory system. 100% offline and under your control. ☆65 · Updated 3 months ago
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆61 · Updated 3 months ago
- The PyVisionAI Official Repo ☆108 · Updated 5 months ago
- Welcome! ☆141 · Updated last year
- Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes. ☆245 · Updated last month
- Fast local speech-to-text for any app using faster-whisper ☆144 · Updated 3 months ago
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆112 · Updated 6 months ago
- Explore the unknown, build the future, own your data. ☆225 · Updated this week
- Open-source hybrid assistant using small models (2B–5B) and Gemini, with image and agentic tool capabilities and integration of RAG… ☆224 · Updated 3 months ago
- A lightweight UI for chatting with Ollama models. Streaming responses, conversation history, and multi-model support. ☆147 · Updated 10 months ago
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆47 · Updated last month
- Run Orpheus 3B Locally With LM Studio ☆32 · Updated 9 months ago
- ☆83 · Updated 10 months ago
- pdfLLM is a completely open-source, proof-of-concept RAG app. ☆182 · Updated 4 months ago
- ☆229 · Updated 8 months ago
- Command-line personal assistant using your favorite proprietary or local models, with access to 30+ tools ☆111 · Updated 6 months ago
- Aggregates compute from spare GPU capacity ☆184 · Updated last week