JosefAlbers / Phi-3-Vision-MLX
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
☆235 Updated 2 months ago
Related projects
Alternatives and complementary repositories for Phi-3-Vision-MLX
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆212 Updated last week
- MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX. ☆471 Updated this week
- Fast parallel LLM inference for MLX ☆146 Updated 4 months ago
- A simple web UI / frontend for MLX (mlx-lm) using Streamlit. ☆225 Updated last month
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆223 Updated last week
- Start a server from the MLX library. ☆159 Updated 3 months ago
- 👾🍎 Apple MLX engine for LM Studio ☆171 Updated this week
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆328 Updated 2 months ago
- Implementation of F5-TTS in MLX ☆309 Updated last week
- 📋 NotebookMLX - an open-source version of NotebookLM (a port of NotebookLlama) ☆152 Updated last week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆143 Updated 2 weeks ago
- Scripts to create your own MoE (mixture-of-experts) models using MLX ☆86 Updated 8 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆146 Updated 9 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆51 Updated last week
- Port of Suno's Bark TTS transformer to Apple's MLX framework ☆71 Updated 8 months ago
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆75 Updated 3 weeks ago
- For running inference and serving local LLMs using the MLX framework ☆89 Updated 7 months ago
- An MLX project to fine-tune a base model on your WhatsApp chats using (Q)LoRA ☆159 Updated 9 months ago
- WIP - Allows you to create DSPy pipelines using ComfyUI ☆179 Updated 3 months ago
- Solving data for LLMs - create high-quality synthetic datasets! ☆136 Updated 3 weeks ago
- Generate synthetic data using OpenAI, Mistral AI, or Anthropic ☆222 Updated 6 months ago
- Distributed inference for MLX LLMs ☆68 Updated 3 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆162 Updated 8 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆134 Updated 5 months ago
- Automatically quantize GGUF models ☆137 Updated this week
- On-device Inference of Diffusion Models for Apple Silicon
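
Most of the projects above share the same basic workflow: fetch an MLX-converted checkpoint from the Hugging Face Hub and run it locally on Apple Silicon. As a minimal illustrative sketch (it uses the mlx-lm package rather than this repository's own API, and the model id is only an example that any MLX-community checkpoint could replace):

```python
# Minimal sketch: local LLM inference on Apple Silicon with the mlx-lm package
# (pip install mlx-lm). The model id below is only an example; any
# MLX-converted checkpoint from the Hugging Face Hub can be substituted.
from mlx_lm import load, generate

# Download (or reuse the cached copy of) a 4-bit quantized checkpoint.
model, tokenizer = load("mlx-community/Phi-3.5-mini-instruct-4bit")

# Run a single prompt locally; max_tokens bounds the length of the reply.
text = generate(
    model,
    tokenizer,
    prompt="Summarize what MLX is in one sentence.",
    max_tokens=100,
)
print(text)
```

mlx-lm also ships an OpenAI-compatible HTTP server (`python -m mlx_lm.server`), which is similar in spirit to the serving-oriented projects listed above.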