RoyalCities / RC-Home-Assistant-Low-VRAM
Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model recommendations; settings are designed for low-VRAM systems.
☆206 · Updated 2 months ago
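For orientation, the sketch below shows the kind of local-LLM call a stack like this is built around: a single chat turn sent to a local Ollama server over its REST API, with prior turns passed along so follow-up questions keep their context. It is an illustration only, not code from the repository; the model name and context-window setting are placeholder assumptions, not the repo's recommended settings.

```python
# Minimal, illustrative sketch (not code from the repo): send one chat turn to a
# local Ollama server the way a low-VRAM voice-assistant pipeline typically would.
# Endpoint and response shape follow Ollama's documented /api/chat REST API;
# the model name and num_ctx value are assumptions, not the repo's settings.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint


def ask(prompt: str, history: list[dict] | None = None) -> str:
    """Send one chat turn (plus prior turns for follow-up context) and return the reply."""
    messages = (history or []) + [{"role": "user", "content": prompt}]
    payload = {
        "model": "llama3.2:3b",        # hypothetical small model for low-VRAM GPUs
        "messages": messages,
        "stream": False,               # one JSON response instead of a token stream
        "options": {"num_ctx": 2048},  # modest context window to keep VRAM use down
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


if __name__ == "__main__":
    print(ask("Turn off the living room lights."))
```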
Alternatives and similar repositories for RC-Home-Assistant-Low-VRAM
Users interested in RC-Home-Assistant-Low-VRAM are comparing it to the repositories listed below.
- ☆187 · Updated 6 months ago
- ☆165 · Updated last month
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆108 · Updated 3 months ago
- ☆178 · Updated 3 weeks ago
- The PyVisionAI Official Repo ☆105 · Updated 2 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆279 · Updated last month
- 🗣️ Real‑time, low‑latency voice, vision, and conversational‑memory AI assistant built on LiveKit and local LLMs ✨ ☆95 · Updated 3 months ago
- Notate is a desktop chat application that takes AI conversations to the next level. It combines the simplicity of chat with advanced feat… ☆257 · Updated 7 months ago
- OLLama IMage CAtegorizer ☆69 · Updated 8 months ago
- Agent MCP for ffmpeg ☆207 · Updated 4 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆221 · Updated 3 months ago
- A web application that converts speech to speech, 100% private ☆76 · Updated 4 months ago
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆255 · Updated 7 months ago
- AI creative coding studio: Deepresearch, blogs, and animation, all in the browser with full privacy. ☆66 · Updated 3 weeks ago
- CoexistAI is a modular, developer-friendly research assistant framework. It enables you to build, search, summarize, and automate resear… ☆300 · Updated this week
- vLLM port of the Chatterbox TTS model ☆306 · Updated 3 weeks ago
- Curated list of tools, frameworks, and resources for running, building, and deploying AI privately — on-prem, air-gapped, or self-hosted. ☆146 · Updated 3 weeks ago
- BUDDIE is the first full-stack open-source AI voice interaction solution, providing a complete end-to-end system from hardware design to … ☆155 · Updated last month
- ☆224 · Updated 4 months ago
- Lightweight & fast AI inference proxy for self-hosted LLMs backends like Ollama, LM Studio and others. Designed for speed, simplicity and…☆92Updated last week
- Welcome! ☆140 · Updated 9 months ago
- Cascading voice assistant combining real-time speech recognition, AI reasoning, and neural text-to-speech capabilities. ☆121 · Updated 3 weeks ago
- ☆83 · Updated 7 months ago
- Give your local LLM a real memory with a lightweight, fully local memory system. 100% offline and under your control. ☆58 · Updated 2 weeks ago
- the IDE for research, built from the ground up with AI integrations ☆115 · Updated last week
- Speech-to-speech AI assistant with natural conversation flow, mid-speech interruption, vision capabilities and AI-initiated follow-ups. F… ☆242 · Updated 5 months ago
- LLM search engine faster than Perplexity! ☆359 · Updated last month
- Real-time TTS reading of large text files in your favourite voice, plus translation via LLM (Python script) ☆52 · Updated 11 months ago
- ☆133 · Updated 3 months ago
- Run Orpheus 3B Locally With LM Studio ☆31 · Updated 6 months ago