google-ai-edge / LiteRT-LM
☆260 · Updated last week
Alternatives and similar repositories for LiteRT-LM
Users interested in LiteRT-LM are comparing it to the libraries listed below.
- Inference, fine-tuning, and many more recipes for the Gemma family of models ☆223 · Updated last week
- Fast Streaming TTS with Orpheus + WebRTC (with FastRTC) ☆298 · Updated 3 months ago
- TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices ☆185 · Updated last month
- Train Large Language Models on MLX. ☆126 · Updated this week
- ☆101 · Updated 10 months ago
- Blazing fast Whisper Turbo for ASR (speech-to-text) tasks ☆212 · Updated 8 months ago
- ☆206 · Updated 5 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆179 · Updated last month
- A flexible, adaptive classification system for dynamic text classification ☆336 · Updated 3 weeks ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆271 · Updated 10 months ago
- Examples of how to use various LLM providers with a Wine Classification problem ☆96 · Updated 3 weeks ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆313 · Updated 3 months ago
- Gemma 2 optimized for your local machine. ☆376 · Updated 11 months ago
- ☆95 · Updated 6 months ago
- Solving data for LLMs - Create quality synthetic datasets! ☆150 · Updated 5 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆214 · Updated last year
- Distributed inference for MLX LLMs ☆93 · Updated 11 months ago
- ☆101 · Updated last month
- The Open Deep Research app – generate reports with OSS LLMs ☆261 · Updated 2 weeks ago
- Realtime Voice and Vision with Brilliant Labs Frame and Gemini ☆58 · Updated 2 months ago
- Sparse inferencing for transformer-based LLMs ☆193 · Updated this week
- Command-line personal assistant using your favorite proprietary or local models, with access to 30+ tools ☆110 · Updated 2 weeks ago
- ☆171 · Updated 11 months ago
- SmolVLM2 Demo ☆159 · Updated 3 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆437 · Updated last week
- 📋 NotebookMLX - An Open Source version of NotebookLM (Ported NotebookLlama) ☆303 · Updated 4 months ago
- Kyutai with an "eye" ☆207 · Updated 3 months ago
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆81 · Updated last year
- Fast parallel LLM inference for MLX ☆198 · Updated last year
- Using the moondream VLM with optical flow for promptable object tracking ☆68 · Updated 4 months ago