madroidmaq / mlx-omni-server
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.
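Because the server exposes OpenAI-compatible endpoints, an existing client can target it simply by pointing requests at the local base URL. The sketch below shows the wire format of a chat-completion call using only the Python standard library; the port (10240) and the model id are assumptions for illustration, not confirmed by this page — check the project's README for the actual defaults.

```python
import json
import urllib.request

# Assumed local base URL; MLX Omni Server's actual default may differ.
BASE_URL = "http://localhost:10240/v1"


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request in the OpenAI chat-completions wire format."""
    payload = {
        "model": model,  # hypothetical model id, e.g. an mlx-community model
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(model: str, prompt: str) -> str:
    """Send the request to the local server and return the reply text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

The same endpoint works with the official `openai` SDK by setting `base_url` to the local address; no real API key is needed for a local server.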
688 stars · Updated Mar 10, 2026

Alternatives and similar repositories for mlx-omni-server

Users interested in mlx-omni-server are comparing it to the libraries listed below.

