RoyalCities / RC-Home-Assistant-Low-VRAM
Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model recommendations - settings designed for low VRAM systems.
☆221 · Updated 4 months ago
Alternatives and similar repositories for RC-Home-Assistant-Low-VRAM
Users interested in RC-Home-Assistant-Low-VRAM are comparing it to the repositories listed below.
- ☆195 · Updated 8 months ago
- The PyVisionAI Official Repo ☆104 · Updated 5 months ago
- OLLama IMage CAtegorizer ☆70 · Updated 11 months ago
- Notate is a desktop chat application that takes AI conversations to the next level. It combines the simplicity of chat with advanced feat… ☆261 · Updated 10 months ago
- ☆176 · Updated 4 months ago
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆257 · Updated 2 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆278 · Updated 4 months ago
- ☆200 · Updated 3 months ago
- 🗣️ Real-time, low-latency voice, vision, and conversational-memory AI assistant built on LiveKit and local LLMs ✨ ☆100 · Updated 6 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆251 · Updated 6 months ago
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆112 · Updated 5 months ago
- The AI IDE for work, research, development, and play ☆220 · Updated this week
- BUDDIE is the first full-stack open-source AI voice interaction solution, providing a complete end-to-end system from hardware design to … ☆234 · Updated 4 months ago
- A web application that converts speech to speech, 100% private ☆81 · Updated 6 months ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover, and unified model di… ☆123 · Updated this week
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support ☆226 · Updated 4 months ago
- Welcome! ☆140 · Updated last year
- Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes. ☆242 · Updated last month
- ☆94 · Updated 6 months ago
- A command-line personal assistant that integrates with Google Calendar, Gmail, and Tasks to help manage your digital life ☆128 · Updated 3 months ago
- Fast local speech-to-text for any app using faster-whisper ☆146 · Updated 3 months ago
- pdfLLM is a completely open-source, proof-of-concept RAG app ☆180 · Updated 3 months ago
- ☆228 · Updated 7 months ago
- Command-line personal assistant using your favorite proprietary or local models, with access to 30+ tools ☆112 · Updated 5 months ago
- A persistent local memory for AI, LLMs, or Copilot in VS Code ☆182 · Updated last month
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆59 · Updated 2 months ago
- An LLM search engine faster than Perplexity! ☆369 · Updated 4 months ago
- Real-time TTS reading of large text files by your favourite voice, plus translation via LLM (Python script) ☆52 · Updated last year
- The easiest & fastest way to run LLMs in your home lab ☆72 · Updated 3 weeks ago
- Generates breakthrough ideas from a single prompt through an 8-stage walkthrough, with an optional research proposal paper ☆58 · Updated 2 months ago