anurmatov / mac-studio-server
Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource optimization, and remote management via SSH.
☆275 · Updated this week
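The setup described above amounts to running Ollama as an always-on, headless service on the Mac Studio and talking to it from other machines. As a minimal sketch of what that looks like from a client's point of view, the snippet below lists the models installed on the server and sends one chat turn through Ollama's OpenAI-compatible endpoint. The hostname `mac-studio.local` is an assumption for illustration, 11434 is Ollama's default port, and none of this is taken from the repository's own scripts.

```python
"""Minimal sketch: health-check a headless Ollama server from another machine.

Assumptions (not from the repo): the server is reachable as `mac-studio.local`
and listens on Ollama's default port 11434. Uses only the standard library.
"""
import json
import urllib.request

OLLAMA_URL = "http://mac-studio.local:11434"  # hypothetical host, default Ollama port


def list_models() -> list[str]:
    """Return the names of models installed on the remote server (GET /api/tags)."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


def chat(model: str, prompt: str) -> str:
    """Send a single chat turn via Ollama's OpenAI-compatible /v1/chat/completions."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    models = list_models()
    print("Installed models:", models)
    if models:
        print(chat(models[0], "Reply with one word: ready?"))
```

Any OpenAI-compatible client can be pointed at the same `/v1` base URL; several of the servers listed below (for example the `afm` CLI) advertise the same OpenAI-compatible interface, so the client side barely changes.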
Alternatives and similar repositories for mac-studio-server
Users interested in mac-studio-server are comparing it to the repositories listed below
- MLX-GUI: MLX Inference Server for Apple Silicon ☆170 · Updated 2 weeks ago
- This is a cross-platform desktop application that allows you to chat with locally hosted LLMs and enjoy features like MCP support ☆226 · Updated 5 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆644 · Updated last month
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆280 · Updated 3 weeks ago
- Parse files (e.g. code repos) and websites to clipboard or a file for ingestion by AI / LLMs ☆361 · Updated last week
- Your gateway to both Ollama & Apple MLX models ☆149 · Updated 10 months ago
- High-performance MLX-based LLM inference engine for macOS with native Swift implementation ☆471 · Updated last week
- Command-line personal assistant using your favorite proprietary or local models with access to 30+ tools ☆111 · Updated 7 months ago
- MCP server for enabling LLM applications to perform deep research via the MCP protocol ☆309 · Updated 2 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆256 · Updated 7 months ago
- Overide (pronounced over·ide) is a lightweight, yet powerful CLI tool that seamlessly integrates AI-powered code generation into your dev… ☆192 · Updated 6 months ago
- Accessing Apple Intelligence and ChatGPT desktop through OpenAI / Ollama API ☆333 · Updated 5 months ago
- ollama-like CLI tool for MLX models on huggingface (pull, rm, list, show, serve, etc.) ☆124 · Updated last week
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆226 · Updated 5 months ago
- A document-based RAG application ☆130 · Updated 9 months ago
- The only general AI agent that does NOT require an extra API key, giving you full control over your local and remote macOS from Claude Deskto… ☆450 · Updated 7 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆338 · Updated 10 months ago
- Train Large Language Models on MLX. ☆241 · Updated last week
- Claude Code with any LLM ☆245 · Updated 5 months ago
- Nginx proxy server in a Docker container to authenticate & proxy requests to Ollama from the public Internet via Cloudflare Tunnel ☆157 · Updated 4 months ago
- Run and monitor MCP servers locally ☆199 · Updated 5 months ago
- 'afm' command cli: macOS server and single prompt mode that exposes Apple's Foundation Models through OpenAI-compatible API endpoints. Su… ☆89 · Updated 2 weeks ago
- Fast local speech-to-text for any app using faster-whisper ☆145 · Updated 4 months ago
- An MCP server allowing LLM agents to easily connect and retrieve data from any database ☆99 · Updated 5 months ago
- Ollama desktop client for everyday use ☆89 · Updated 8 months ago
- Notate is a desktop chat application that takes AI conversations to the next level. It combines the simplicity of chat with advanced feat… ☆262 · Updated 11 months ago
- You don’t need to read the code to understand how to build! ☆254 · Updated last week
- Welcome! ☆141 · Updated last year
- A lightweight UI for chatting with Ollama models. Streaming responses, conversation history, and multi-model support. ☆147 · Updated 10 months ago
- Local debugging agent that runs in your terminal ☆395 · Updated 7 months ago