JosefAlbers / Phi-3-Vision-MLX
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
☆265 · Updated 7 months ago
Alternatives and similar repositories for Phi-3-Vision-MLX:
Users interested in Phi-3-Vision-MLX often compare it to the repositories listed below.
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆295 · Updated last month
- A simple web UI / frontend for MLX's mlx-lm using Streamlit. ☆252 · Updated 3 months ago
- Fast parallel LLM inference for MLX. ☆186 · Updated 9 months ago
- Start a server from the MLX library. ☆185 · Updated 9 months ago
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆263 · Updated this week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆204 · Updated 6 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆350 · Updated 3 weeks ago
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆142 · Updated 2 weeks ago
- For running and serving local LLMs using the MLX framework. ☆103 · Updated last year
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆168 · Updated last year
- An implementation of CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆328 · Updated this week
- Run embeddings in MLX. ☆87 · Updated 7 months ago
- ☆167 · Updated last month
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI. ☆223 · Updated last year
- Port of Suno's Bark TTS transformer to Apple's MLX framework. ☆80 · Updated last year
- ☆153 · Updated 9 months ago
- The easiest way to run the fastest MLX-based LLMs locally. ☆278 · Updated 6 months ago
- 📋 NotebookMLX - An open-source version of NotebookLM (a port of NotebookLlama). ☆278 · Updated 2 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆78 · Updated 4 months ago
- ☆112 · Updated 4 months ago
- ☆171 · Updated 8 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆174 · Updated last year
- On-device image generation for Apple Silicon. ☆612 · Updated 3 weeks ago
- ☆281 · Updated 11 months ago
- ☆345 · Updated 7 months ago
- Guaranteed structured output from any language model via hierarchical state machines. ☆125 · Updated this week
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆442 · Updated 3 months ago
- Scripts to create your own MoE models using MLX. ☆89 · Updated last year
- Distributed inference for MLX LLMs. ☆89 · Updated 9 months ago
- A simple Jupyter notebook for learning MLX text-completion fine-tuning. ☆117 · Updated 5 months ago