Picovoice / picollm
On-device LLM Inference Powered by X-Bit Quantization
☆268 · Updated last month
Alternatives and similar repositories for picollm
Users interested in picollm are comparing it to the libraries listed below.
- Recipes for on-device voice AI and local LLMs ☆98 · Updated 3 months ago
- ☆338 · Updated this week
- On-device streaming text-to-speech engine powered by deep learning ☆120 · Updated 2 weeks ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆280 · Updated last year
- Local ML voice chat using high-end models ☆175 · Updated last month
- VS Code AI coding assistant powered by a self-hosted llama.cpp endpoint ☆183 · Updated 7 months ago
- Replace OpenAI with llama.cpp automagically ☆326 · Updated last year
- Open-source LLM UI, compatible with all local LLM providers ☆174 · Updated last year
- Uses the FastChat-T5 large language model, the Vosk API for automatic speech recognition, and Piper for text-to-speech ☆125 · Updated 2 years ago
- ☆91 · Updated 4 months ago
- Locally running LLM with internet access ☆96 · Updated 2 months ago
- Nginx proxy server in a Docker container to authenticate and proxy requests to Ollama from the public internet via a Cloudflare Tunnel ☆140 · Updated 3 weeks ago
- 1.58-bit LLM on Apple Silicon using MLX ☆223 · Updated last year
- Set up and run a local LLM and chatbot using consumer-grade hardware ☆289 · Updated this week
- A simple experiment letting two local LLMs have a conversation about anything ☆112 · Updated last year
- ☆132 · Updated 4 months ago
- Running an LLM on the ESP32 ☆78 · Updated 11 months ago
- WebAssembly binding for llama.cpp, enabling in-browser LLM inference ☆890 · Updated 3 weeks ago
- Plug Whisper audio transcription into a local Ollama server and output TTS audio responses ☆351 · Updated last year
- Fast streaming TTS with Orpheus + WebRTC (with FastRTC) ☆306 · Updated 5 months ago
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆272 · Updated last year
- React Native binding of llama.cpp ☆39 · Updated this week
- Something similar to Apple Intelligence? ☆61 · Updated last year
- A platform to self-host AI on easy mode ☆167 · Updated last week
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework ☆280 · Updated 3 months ago
- A fast batching API to serve LLM models ☆187 · Updated last year
- Run LLMs in the browser with MLC / WebLLM ✨ ☆143 · Updated 11 months ago
- A mobile implementation of llama.cpp ☆320 · Updated last year
- A fully in-browser privacy solution to make conversational AI privacy-friendly ☆229 · Updated 11 months ago
- EntityDB is an in-browser vector database wrapping IndexedDB and Transformers.js over WebAssembly ☆216 · Updated 4 months ago