ivanfioravanti / qwen-image-mps
Qwen Image models through MPS
☆ 212 · Updated last week
Alternatives and similar repositories for qwen-image-mps
Users interested in qwen-image-mps are comparing it to the repositories listed below.
- MLX-GUI: an MLX inference server for Apple Silicon ☆ 124 · Updated last month
- An implementation of the CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆ 378 · Updated last month
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆ 272 · Updated last year
- FastMLX: a high-performance, production-ready API for hosting MLX models. ☆ 331 · Updated 6 months ago
- Train large language models on MLX. ☆ 180 · Updated this week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆ 217 · Updated 11 months ago
- GenAI and agent toolkit for Apple Silicon Macs, implementing JSON-schema-steered structured output (3SO) and tool calling in Python. For mor… ☆ 129 · Updated last month
- Ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.) ☆ 103 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆ 95 · Updated 3 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, designed specifically for Apple Silicon (M-series) chips. I… ☆ 556 · Updated last month
- MLX-Embeddings is the best package for running vision and language embedding models locally on your Mac using MLX. ☆ 210 · Updated 3 weeks ago
- Instant, perfect, native macOS transcription ☆ 47 · Updated 2 months ago
- Pipecat voice AI agents running locally on macOS ☆ 281 · Updated last month
- Command-line personal assistant using your favorite proprietary or local models, with access to more than 30 tools ☆ 111 · Updated 3 months ago
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆ 64 · Updated 7 months ago
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆ 280 · Updated 3 months ago
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆ 108 · Updated 3 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆ 302 · Updated 11 months ago
- A simple Jupyter notebook for learning MLX text-completion fine-tuning ☆ 122 · Updated 10 months ago
- Find the hidden meaning of LLMs ☆ 27 · Updated 2 months ago
- Port of Suno's Bark TTS transformer to Apple's MLX framework ☆ 85 · Updated last year
- Guaranteed structured output from any language model via hierarchical state machines ☆ 146 · Updated last week
- Distributed inference for MLX LLMs ☆ 95 · Updated last year
- For running and serving local LLMs using the MLX framework ☆ 109 · Updated last year
- 📋 NotebookMLX: an open-source version of NotebookLM (ported from NotebookLlama) ☆ 317 · Updated 7 months ago
- CLI tool for text-to-image generation using the FLUX.1 model ☆ 64 · Updated 3 months ago
- Examples of how to use various LLM providers on a wine-classification problem ☆ 123 · Updated last month
- Fast parallel LLM inference for MLX ☆ 220 · Updated last year
- A little file for LLM-assisted prompt expansion and image generation using Flux.schnell, complete with prompt history, prompt queu… ☆ 26 · Updated last year
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆ 259 · Updated 6 months ago
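Several entries above concern serving models headlessly on Apple Silicon. As a rough illustration of the kind of tuning the last entry describes, Ollama reads a handful of environment variables at startup; the values below are illustrative placeholders, not taken from that repository's actual configuration.

```shell
# Hedged sketch: environment variables the Ollama server reads at startup.
# Values are illustrative, not the repository's recommended settings.
export OLLAMA_HOST=0.0.0.0:11434      # listen address for the API server
export OLLAMA_KEEP_ALIVE=30m          # how long models stay loaded after a request
export OLLAMA_NUM_PARALLEL=2          # concurrent requests served per model
export OLLAMA_MAX_LOADED_MODELS=1     # cap models resident in unified memory
ollama serve
```

On a headless Mac, a setup like this is typically wrapped in a launchd job so the server starts automatically at boot.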