Blaizzy / mlx-vlm
MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.
☆496 · Updated this week
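A minimal sketch of typical mlx-vlm usage from Python. The model id, function names, and argument names below are assumptions drawn from the package's documented load/generate helpers and may differ between releases:

```python
# Minimal sketch of describing an image with mlx-vlm (assumed API surface;
# check the mlx-vlm README for the exact signatures in your installed version).
from mlx_vlm import load, generate

# Hypothetical quantized vision-language model from the mlx-community Hugging Face org.
model, processor = load("mlx-community/Qwen2-VL-2B-Instruct-4bit")

# Argument names and order may vary across mlx-vlm releases; treat as illustrative.
output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="path/to/image.jpg",  # local path or URL
    max_tokens=100,
)
print(output)
```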
Related projects
Alternatives and complementary repositories for mlx-vlm
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆218 · Updated 3 weeks ago
- On-device inference of diffusion models for Apple Silicon ☆509 · Updated 3 weeks ago
- 👾🍎 Apple MLX engine for LM Studio ☆213 · Updated this week
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆226 · Updated this week
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆237 · Updated 2 months ago
- Fast parallel LLM inference for MLX ☆149 · Updated 4 months ago
- A simple web UI / frontend for MLX mlx-lm using Streamlit. ☆227 · Updated last month
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆348 · Updated 2 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆145 · Updated 9 months ago
- Start a server from the MLX library. ☆161 · Updated 3 months ago
- gpt-2 from scratch in mlx ☆358 · Updated 5 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆212 · Updated 3 weeks ago
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. ☆988 · Updated this week
- Use late-interaction multi-modal models such as ColPali in just a few lines of code. ☆617 · Updated last week
- Gemma 2 optimized for your local machine. ☆343 · Updated 3 months ago
- An MLX project to fine-tune a base model on your WhatsApp chats using (Q)LoRA. ☆159 · Updated 10 months ago
- Recipes for learning, fine-tuning, and adapting ColPali to your multimodal RAG use cases. 👨🏻‍🍳 ☆175 · Updated this week
- Generate synthetic data using OpenAI, Mistral AI, or Anthropic. ☆221 · Updated 6 months ago
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆584 · Updated 6 months ago
- Implementation of F5-TTS in MLX ☆327 · Updated 2 weeks ago
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆159 · Updated last month
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆282 · Updated last month
- 📋 NotebookMLX, an open-source version of NotebookLM (a port of NotebookLlama) ☆180 · Updated 2 weeks ago
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆498 · Updated 3 weeks ago
- Python tools for WhisperKit: model conversion, optimization, and evaluation ☆171 · Updated last week