Blaizzy / mlx-vlm
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
☆1,583 · Updated this week
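For context on what the package does, here is a minimal inference sketch based on mlx-vlm's documented `load`/`generate` API; the model path is an assumption, and exact function signatures can differ between releases:

```python
# Minimal sketch of VLM inference with mlx-vlm (API details may vary by version).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Hypothetical quantized model from the mlx-community hub; any supported VLM works.
model_path = "mlx-community/Qwen2-VL-2B-Instruct-4bit"

model, processor = load(model_path)   # fetch weights plus tokenizer/processor
config = load_config(model_path)      # model config used to build the chat template

images = ["path/to/image.jpg"]         # local path or URL to the input image
prompt = apply_chat_template(processor, config, "Describe this image.",
                             num_images=len(images))

# Run generation on Apple Silicon via MLX and print the result.
output = generate(model, processor, prompt, images, max_tokens=256, verbose=False)
print(output)
```

The repository also documents a command-line entry point (`python -m mlx_vlm.generate`) for the same workflow.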
Alternatives and similar repositories for mlx-vlm
Users interested in mlx-vlm are comparing it to the libraries listed below.
- Run LLMs with MLX ☆1,721 · Updated this week
- LM Studio Apple MLX engine ☆745 · Updated 2 weeks ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆531 · Updated this week
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆1,521 · Updated last week
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆324 · Updated 5 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆769 · Updated last year
- On-device Image Generation for Apple Silicon ☆646 · Updated 4 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆453 · Updated 6 months ago
- An implementation of the CSM (Conversation Speech Model) for Apple Silicon using MLX. ☆372 · Updated last week
- Artificial Neural Engine Machine Learning Library ☆1,136 · Updated last month
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX. ☆424 · Updated this week
- Making the community's best AI chat models available to everyone. ☆1,978 · Updated 6 months ago
- Big & Small LLMs working together ☆1,127 · Updated this week
- Implementation of F5-TTS in MLX ☆574 · Updated 5 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,130 · Updated last week
- 🤖✨ ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆805 · Updated 5 months ago
- ☆370 · Updated 10 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆271 · Updated 11 months ago
- Fast State-of-the-Art Static Embeddings ☆1,801 · Updated last week
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆191 · Updated last week
- A text-to-speech (TTS), speech-to-text (STT) and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆2,592 · Updated this week
- Official inference library for pre-processing of Mistral models ☆778 · Updated 2 weeks ago
- VS Code extension for LLM-assisted code/text completion ☆917 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 2 months ago
- ☆297 · Updated 4 months ago
- Efficient framework-agnostic data loading ☆432 · Updated 2 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,315 · Updated 4 months ago
- Real Time Speech Transcription with FastRTC ⚡️ and Local Whisper 🤗 ☆675 · Updated last month
- Recipes for shrinking, optimizing, customizing cutting-edge vision models. 💜 ☆1,571 · Updated last week
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference ☆843 · Updated 3 weeks ago