madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
☆451 · Updated last week
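Because the server exposes OpenAI-compatible endpoints, existing OpenAI SDK clients can target it by simply overriding the base URL. A minimal sketch in Python, assuming the server is already running locally; the port and model name below are illustrative assumptions, not values confirmed by this listing:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local MLX Omni Server.
# Port and model id are assumptions for illustration; check the
# server's startup output for the actual endpoint and model names.
client = OpenAI(
    base_url="http://localhost:10240/v1",  # assumed local endpoint
    api_key="not-needed",                  # local server, no real key required
)

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # example MLX model id
    messages=[{"role": "user", "content": "Hello from Apple Silicon!"}],
)
print(response.choices[0].message.content)
```

Since only the base URL changes, tooling already built against the OpenAI API can switch to local inference without code rewrites.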
Alternatives and similar repositories for mlx-omni-server
Users interested in mlx-omni-server are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API to host MLX models. ☆315 · Updated 4 months ago
- An implementation of the CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆366 · Updated 2 months ago
- ☆182 · Updated 4 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆182 · Updated last month
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆275 · Updated last month
- The easiest way to run the fastest MLX-based LLMs locally. ☆291 · Updated 8 months ago
- Apple MLX engine for LM Studio. ☆685 · Updated this week
- High-performance MLX-based LLM inference engine for macOS with native Swift implementation. ☆294 · Updated this week
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon. ☆271 · Updated 10 months ago
- Train Large Language Models on MLX. ☆133 · Updated this week
- MLX-GUI, an MLX inference server. ☆77 · Updated last week
- An implementation of NVIDIA's Parakeet models for Apple Silicon using MLX. ☆382 · Updated last week
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks. ☆212 · Updated 9 months ago
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆447 · Updated 5 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,520 · Updated this week
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆742 · Updated last year
- A simple web UI / frontend for mlx-lm using Streamlit. ☆258 · Updated last month
- ☆292 · Updated 3 months ago
- Fast parallel LLM inference for MLX. ☆198 · Updated last year
- Your gateway to both Ollama & Apple MLX models. ☆140 · Updated 4 months ago
- Start a server from the MLX library. ☆188 · Updated 11 months ago
- 📋 NotebookMLX - an open-source version of NotebookLM (a port of NotebookLlama). ☆305 · Updated 4 months ago
- On-device Image Generation for Apple Silicon. ☆632 · Updated 3 months ago
- Run LLMs with MLX. ☆1,322 · Updated this week
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆205 · Updated 4 months ago
- 🤖✨ChatMLX is a modern, open-source, high-performance chat application for macOS based on large language models. ☆798 · Updated 4 months ago
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆172 · Updated last year
- Implementation of F5-TTS in MLX. ☆562 · Updated 4 months ago
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆96 · Updated 5 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆175 · Updated last year