Picovoice / picollm
On-device LLM Inference Powered by X-Bit Quantization
☆278 · Updated last week
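picoLLM provides on-device inference SDKs for its compressed models. The snippet below is a minimal sketch of local generation with the Python SDK, assuming the `picollm` package is installed; the AccessKey and `.pllm` model path are placeholders, not values from this listing.

```python
import picollm

# Placeholders (assumptions for this sketch): substitute your own Picovoice
# AccessKey and the path to a locally downloaded .pllm model file.
pllm = picollm.create(
    access_key='${ACCESS_KEY}',
    model_path='./model.pllm')

# Generation runs fully on-device; no prompt data leaves the machine.
res = pllm.generate('Summarize X-bit quantization in one sentence.')
print(res.completion)
```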
Alternatives and similar repositories for picollm
Users interested in picollm are comparing it to the libraries listed below.
- Recipes for on-device voice AI and local LLM · ☆104 · Updated last week
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint · ☆183 · Updated last year
- On-device streaming text-to-speech engine powered by deep learning · ☆127 · Updated last week
- 1.58-bit LLM on Apple Silicon using MLX · ☆242 · Updated last year
- Open source LLM UI, compatible with all local LLM providers · ☆177 · Updated last year
- Locally running LLM with internet access · ☆97 · Updated 7 months ago
- API Server for Transformer Lab · ☆83 · Updated 2 months ago
- ☆786 · Updated this week
- Awesome Mobile LLMs · ☆301 · Updated 2 months ago
- Replace OpenAI with llama.cpp automagically · ☆328 · Updated last year
- FastMLX is a high-performance, production-ready API for hosting MLX models · ☆341 · Updated 10 months ago
- Fast Streaming TTS with Orpheus + WebRTC (with FastRTC) · ☆347 · Updated 9 months ago
- Run LLMs in the Browser with MLC / WebLLM ✨ · ☆150 · Updated last year
- Set up and run a local LLM and chatbot using consumer-grade hardware · ☆310 · Updated 2 months ago
- A fully in-browser privacy solution to make Conversational AI privacy-friendly · ☆234 · Updated last year
- ☆95 · Updated last year
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework · ☆284 · Updated 7 months ago
- Self-host LLMs with vLLM and BentoML · ☆168 · Updated last week
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses · ☆629 · Updated last year
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon · ☆273 · Updated 2 months ago
- Open source repo for AI in a Box · ☆71 · Updated last year
- Local ML voice chat using high-end models · ☆181 · Updated last month
- A mobile implementation of llama.cpp · ☆326 · Updated 2 years ago
- Plug Whisper audio transcription into a local Ollama server and output TTS audio responses · ☆367 · Updated 3 months ago
- Something similar to Apple Intelligence? · ☆60 · Updated last year
- ☆109 · Updated 5 months ago
- ☆209 · Updated 3 weeks ago
- WebAssembly (Wasm) build and bindings for llama.cpp · ☆285 · Updated last year
- ☆135 · Updated last month
- Running an LLM on the ESP32 · ☆87 · Updated last year