sammcj / homeassistant-localai
LocalAI integration component for Home Assistant
☆42 · Updated last year
Alternatives and similar repositories for homeassistant-localai
Users interested in homeassistant-localai are comparing it to the repositories listed below.
- Conversation support for Home Assistant using a local Vicuna LLM ☆89 · Updated 10 months ago
- Custom TTS integration using the ElevenLabs API ☆97 · Updated 7 months ago
- Ollama conversation integration for Home Assistant ☆147 · Updated 11 months ago
- LLM Chat is an open-source serverless alternative to ChatGPT. ☆34 · Updated 8 months ago
- LlamaCards is a web application that provides a dynamic interface for interacting with LLM models in real-time. This app allows users to … ☆39 · Updated 9 months ago
- Local character AI chatbot with Chroma vector store memory and some scripts to process documents for Chroma ☆34 · Updated 7 months ago
- Wyoming protocol server that calls an external program to play audio ☆16 · Updated last year
- AI-powered chatbot with real-time updates. ☆55 · Updated 7 months ago
- ☆16 · Updated last month
- ☆25 · Updated last month
- Personal assistant that can run commands at will ☆97 · Updated last year
- An accessible home server platform for AI software ☆16 · Updated 4 months ago
- A Qt GUI for large language models ☆34 · Updated last year
- "Pacha" TUI (Text User Interface) is a JavaScript application that utilizes the "blessed" library. It serves as a frontend for llama.cpp … ☆35 · Updated last year
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- Attend - to what matters. ☆15 · Updated 3 months ago
- A Voice Assistant in your Browser. ☆20 · Updated 3 weeks ago
- ☆46 · Updated 2 weeks ago
- AI Persona in a JSON file ☆17 · Updated 5 months ago
- Open Voice OS container images and docker-compose.yml files for x86_64 and aarch64 CPU architectures. ☆47 · Updated last week
- 100% free, local & offline voice assistant with speech recognition ☆72 · Updated 7 months ago
- ☆14 · Updated 9 months ago
- Simple system tray application to monitor the status of your LLM models running on Ollama ☆19 · Updated 5 months ago
- Use a Porcupine wake word to trigger a voice assistant pipeline in Home Assistant ☆30 · Updated last year
- A conversational UI for chatbots using the llama.cpp server ☆14 · Updated last week
- Docker images and configuration to run text-generation-webui with GPU or CPU support ☆29 · Updated last year
- Chat with your PDF using your local LLM via the Ollama client (incomplete) ☆37 · Updated 7 months ago
- ☆22 · Updated 9 months ago
- VS Code extension for Ollama ☆25 · Updated last year
- IRIS: Demonstrator for use of LLMs in Python (outdated) ☆62 · Updated 2 months ago