itsmostafa / inference-speed-tests
Local LLM inference speed tests on various devices
☆109 Updated 8 months ago
Alternatives and similar repositories for inference-speed-tests
Users interested in inference-speed-tests are comparing it to the libraries listed below.
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆270 Updated 9 months ago
- Your gateway to both Ollama & Apple MLX models ☆149 Updated 9 months ago
- AI agent that controls your computer with OS-level tools, MCP compatible, works with any model ☆120 Updated 3 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆620 Updated last month
- High-performance MLX-based LLM inference engine for macOS with native Swift implementation ☆446 Updated 2 months ago
- Accessing Apple Intelligence and the ChatGPT desktop app through the OpenAI / Ollama API ☆302 Updated 3 months ago
- macOS menu‑bar utility to adjust Apple Silicon GPU VRAM allocation ☆230 Updated 7 months ago
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support ☆226 Updated 3 months ago
- MLX-GUI, an MLX inference server for Apple Silicon ☆151 Updated 3 months ago
- A wannabe Ollama equivalent for Apple MLX models ☆82 Updated 9 months ago
- CoexistAI is a modular, developer-friendly research assistant framework. It enables you to build, search, summarize, and automate resear… ☆375 Updated last month
- A beautiful local-first coding agent running in your terminal - built by the community for the community ⚒ ☆929 Updated this week
- A macOS AppleScript MCP server ☆325 Updated 7 months ago
- Local coding agent with a neat UI ☆331 Updated 6 months ago
- An implementation of Nvidia's Parakeet models for Apple Silicon using MLX ☆681 Updated 2 weeks ago
- Local Apple Notes + LLM chat ☆96 Updated last month
- Qwen Image models through MPS ☆241 Updated 3 weeks ago
- Support for MLX models in LLM ☆225 Updated 7 months ago
- Parse files (e.g. code repos) and websites to the clipboard or a file for ingestion by AI / LLMs ☆315 Updated last week
- macOS Whisper dictation app ☆491 Updated last week
- A magical LLM desktop client that makes it easy for *anyone* to use LLMs and MCP ☆569 Updated last month
- MCP server that executes AppleScript, giving you full control of your Mac ☆382 Updated 2 weeks ago
- Jarvis AI Assistant - voice-powered AI assistant for Mac ☆105 Updated last week
- A cosy home for your LLMs ☆593 Updated this week
- An MCP server that securely interfaces with your iMessage database via the Model Context Protocol (MCP), allowing LLMs to query and analy… ☆188 Updated 3 months ago
- Welcome! ☆140 Updated 11 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆305 Updated last year
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆278 Updated 3 months ago
- Library to traverse and control macOS ☆182 Updated 7 months ago
- Tool for scraping and consolidating documentation websites into a single MD file ☆234 Updated 5 months ago