RoyalCities / RC-Home-Assistant-Low-VRAM
Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model recommendations — settings designed for low-VRAM systems.
☆216 Updated 3 months ago
Alternatives and similar repositories for RC-Home-Assistant-Low-VRAM
Users interested in RC-Home-Assistant-Low-VRAM are comparing it to the libraries listed below.
- ☆190 Updated 7 months ago
- ☆192 Updated 2 months ago
- ☆173 Updated 3 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆277 Updated 2 months ago
- OLLama IMage CAtegorizer ☆70 Updated 10 months ago
- Agent MCP for ffmpeg ☆209 Updated 5 months ago
- The PyVisionAI Official Repo ☆104 Updated 3 months ago
- A cross-platform desktop application that allows you to chat with locally hosted LLMs and enjoy features like MCP support ☆225 Updated 3 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆240 Updated 5 months ago
- Notate is a desktop chat application that takes AI conversations to the next level. It combines the simplicity of chat with advanced feat… ☆256 Updated 8 months ago
- 🗣️ Real-time, low-latency voice, vision, and conversational-memory AI assistant built on LiveKit and local LLMs ✨ ☆97 Updated 4 months ago
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆255 Updated 3 weeks ago
- the IDE for research, built from the ground up with AI integrations ☆162 Updated this week
- A web application that converts speech to speech, 100% private ☆80 Updated 5 months ago
- Curated list of tools, frameworks, and resources for running, building, and deploying AI privately — on-prem, air-gapped, or self-hosted. ☆150 Updated 2 months ago
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆59 Updated last month
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆110 Updated 4 months ago
- AI creative coding studio: Deepresearch, blogs, and animation, all in the browser with full privacy. ☆67 Updated last week
- An LLM search engine faster than Perplexity! ☆364 Updated 2 months ago
- Welcome! ☆140 Updated 11 months ago
- Command-line personal assistant using your favorite proprietary or local models, with access to 30+ tools ☆112 Updated 4 months ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover, and unified model di… ☆112 Updated this week
- ☆226 Updated 6 months ago
- Give your local LLM a real memory with a lightweight, fully local memory system. 100% offline and under your control. ☆61 Updated last month
- Fast local speech-to-text for any app using faster-whisper ☆142 Updated last month
- A simple-to-use Python library for creating podcasts, with support for many LLM and TTS providers ☆76 Updated 3 weeks ago
- Cascading voice assistant combining real-time speech recognition, AI reasoning, and neural text-to-speech capabilities. ☆124 Updated 2 months ago
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆40 Updated last week
- Abbey is a self-hosted, configurable AI interface with workspaces, document chats, YouTube chats, and more. Find our hosted version at htt… ☆408 Updated 6 months ago
- CoexistAI is a modular, developer-friendly research assistant framework. It enables you to build, search, summarize, and automate resear… ☆353 Updated 3 weeks ago