apeatling / ollama-voice-mac
Mac-compatible Ollama Voice
☆509 · Updated 2 months ago
Alternatives and similar repositories for ollama-voice-mac
Users interested in ollama-voice-mac are comparing it to the libraries listed below.
- Plug Whisper audio transcription into a local Ollama server and output TTS audio responses ☆357 · Updated last month
- From anywhere you can type, query and stream the output of any script (e.g. an LLM) ☆501 · Updated last year
- Link your Ollama models to LM-Studio ☆145 · Updated last year
- llama.cpp with the BakLLaVA model describes what it sees ☆381 · Updated 2 years ago
- Implementation of F5-TTS in MLX ☆594 · Updated 8 months ago
- Local semantic search. Stupidly simple. ☆436 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆305 · Updated last year
- AlwaysReddy is an LLM voice assistant that is always just a hotkey away. ☆760 · Updated 8 months ago
- Use locally running LLMs directly from Siri 🦙🟣 ☆183 · Updated last year
- An extremely fast implementation of whisper optimized for Apple Silicon using MLX. ☆808 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆281 · Updated 5 months ago
- Local AI talk with a custom voice based on the Zephyr 7B model. Uses RealtimeSTT with faster_whisper for transcription and RealtimeTTS with C… ☆695 · Updated 5 months ago
- Plugin that lets you ask questions about your documents, including audio and video files. ☆357 · Updated 2 weeks ago
- llama.cpp-based AI chat app for macOS ☆498 · Updated 11 months ago
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆218 · Updated this week
- Chat with your documents using local AI ☆1,074 · Updated last year
- Efficient visual programming for AI language models ☆362 · Updated 6 months ago
- The fastest Whisper optimization for automatic speech recognition as a command-line interface ⚡️ ☆382 · Updated last year
- FastMLX is a high-performance, production-ready API to host MLX models. ☆332 · Updated 8 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆261 · Updated 3 weeks ago
- Generate imagined websites on an infinite canvas ☆612 · Updated 5 months ago
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆342 · Updated 8 months ago
- A Function Calls Proxy for Groq, the fastest AI alive! ☆205 · Updated last year
- ☆285 · Updated last year
- LLMX; the easiest 3rd-party local LLM UI for the web! ☆280 · Updated 2 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆119 · Updated last year
- An AI assistant beyond the chat box. ☆328 · Updated last year
- A GUI interface for Ollama ☆338 · Updated last year
- ☆275 · Updated last year
- Low-latency AI companion voice talk in 60 lines of code using faster_whisper and ElevenLabs input streaming ☆309 · Updated 5 months ago