Picovoice / picollm
On-device LLM Inference Powered by X-Bit Quantization
☆265 · Updated 3 weeks ago
Alternatives and similar repositories for picollm
Users interested in picollm are comparing it to the libraries listed below.
- Recipes for on-device voice AI and local LLM ☆94 · Updated 2 months ago
- On-device streaming text-to-speech engine powered by deep learning ☆120 · Updated 3 weeks ago
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. ☆183 · Updated 7 months ago
- Replace OpenAI with Llama.cpp Automagically. ☆324 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆221 · Updated last year
- Local ML voice chat using high-end models. ☆175 · Updated last week
- ☆311 · Updated this week
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆278 · Updated last year
- Run LLMs in the Browser with MLC / WebLLM ✨ ☆140 · Updated 10 months ago
- A platform to self-host AI on easy mode ☆159 · Updated 2 weeks ago
- Using FastChat-T5 Large Language Model, Vosk API for automatic speech recognition, and Piper for text-to-speech ☆124 · Updated 2 years ago
- A fast batching API to serve LLM models ☆187 · Updated last year
- Open source LLM UI, compatible with all local LLM providers. ☆174 · Updated 11 months ago
- Official implementation of "WhisperNER: Unified Open Named Entity and Speech Recognition" ☆196 · Updated 6 months ago
- Self-host LLMs with vLLM and BentoML ☆140 · Updated this week
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆325 · Updated 5 months ago
- Something similar to Apple Intelligence? ☆61 · Updated last year
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆272 · Updated 11 months ago
- API Server for Transformer Lab ☆72 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 2 months ago
- ☆91 · Updated 3 months ago
- Vercel and web-llm template to run wasm models directly in the browser. ☆160 · Updated last year
- Blazing fast whisper turbo for ASR (speech-to-text) tasks ☆214 · Updated 10 months ago
- Distributed inference for MLX LLMs ☆93 · Updated last year
- ☆132 · Updated 4 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆586 · Updated 6 months ago
- Fast Streaming TTS with Orpheus + WebRTC (with FastRTC) ☆304 · Updated 4 months ago
- An OpenAI-compatible API for chat with image input and questions about the images (aka multimodal). ☆259 · Updated 5 months ago
- A simple experiment letting two local LLMs have a conversation about anything! ☆110 · Updated last year
- Start a server from the MLX library. ☆191 · Updated last year