Mobile-Artificial-Intelligence / maid
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
☆2,039 · Updated 2 weeks ago
Alternatives and similar repositories for maid
Users interested in maid are comparing it to the repositories listed below.
- A simple frontend for LLMs, built in React Native. ☆1,420 · Updated this week
- A modern and easy-to-use client for Ollama. ☆1,254 · Updated last month
- A mobile implementation of llama.cpp. ☆311 · Updated last year
- IRIS is an Android app for interfacing with GGUF / llama.cpp models locally. ☆203 · Updated 3 months ago
- Stable Diffusion AI client app for Android. ☆906 · Updated this week
- Run any GGUF SLMs/LLMs locally, on-device on Android. ☆291 · Updated this week
- Amica is an open-source interface for interactive communication with 3D characters, with voice synthesis and speech recognition. ☆949 · Updated this week
- Dart bindings for llama.cpp. ☆227 · Updated last month
- Compare open-source local LLM inference projects by their metrics to assess popularity and activeness. ☆562 · Updated this week
- LM Studio TypeScript SDK. ☆1,086 · Updated this week
- An app that brings language models directly to your phone. ☆3,465 · Updated last week
- llama.cpp for Flutter. ☆163 · Updated last week
- Local AI API Platform. ☆2,655 · Updated last week
- AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app. ☆283 · Updated last year
- An OpenAI-API-compatible text-to-speech server using Coqui AI's xtts_v2 and/or Piper TTS as the backend. ☆757 · Updated 3 months ago
- An awesome repository of local AI tools. ☆1,534 · Updated 6 months ago
- A proxy server for multiple Ollama instances with key security. ☆426 · Updated last month
- The terminal client for Ollama. ☆1,832 · Updated this week
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆944 · Updated this week
- Pipelines: a versatile, UI-agnostic, OpenAI-compatible plugin framework. ☆1,780 · Updated 3 weeks ago
- Run Llama and other large language models offline on iOS and macOS using the GGML library. ☆1,763 · Updated 2 months ago
- Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference. ☆2,059 · Updated 2 weeks ago
- A multi-platform desktop application to evaluate and compare LLM models, written in Rust and React. ☆730 · Updated 3 weeks ago
- Web UI for ExLlamaV2. ☆496 · Updated 3 months ago
- Plug Whisper audio transcription into a local Ollama server and output TTS audio responses. ☆324 · Updated last year
- Local voice chatbot for engaging conversations, powered by Ollama, Hugging Face Transformers, and the Coqui TTS Toolkit. ☆765 · Updated 9 months ago
- An AI assistant beyond the chat box. ☆328 · Updated last year
- Minimalist web-searching platform with an AI assistant that runs directly from your browser. Uses WebLLM, Wllama, and SearXNG. Demo: https… ☆402 · Updated this week