Goekdeniz-Guelmez / mlx-lm-lens
Find the hidden meaning of LLMs
☆19 · Updated last week
Alternatives and similar repositories for mlx-lm-lens
Users interested in mlx-lm-lens are comparing it to the libraries listed below.
- MLX-GUI: an MLX inference server ☆69 · Updated last week
- Train large language models on MLX. ☆133 · Updated this week
- MLX-Embeddings: run vision and language embedding models locally on your Mac using MLX. ☆179 · Updated last month
- Blazing-fast Whisper Turbo for ASR (speech-to-text) tasks ☆212 · Updated 8 months ago
- A lightweight recreation of OS1/Samantha from the movie Her, running locally in the browser ☆105 · Updated 2 weeks ago
- MLX-based QA-pair generator and LLM fine-tuning tool in Streamlit ☆36 · Updated 7 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆442 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆89 · Updated 2 weeks ago
- An implementation of the CSM (Conversation Speech Model) for Apple Silicon using MLX. ☆366 · Updated 2 months ago
- Distributed inference for MLX LLMs ☆93 · Updated 11 months ago
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON-schema-steered structured output (3SO) and tool calling in Python. For mor… ☆128 · Updated last month
- A command-line utility to manage MLX models between your Hugging Face cache and LM Studio. ☆59 · Updated 4 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆313 · Updated 4 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆81 · Updated 2 months ago
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆271 · Updated 10 months ago
- ☆146 · Updated 2 weeks ago
- ☆101 · Updated last month
- For inference and serving of local LLMs using the MLX framework ☆105 · Updated last year
- Powerful and fast tool-calling agents ☆50 · Updated 4 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆172 · Updated last year
- Optimized Ollama LLM server configuration for Mac Studio and other Apple Silicon Macs. Headless setup with automatic startup, resource op… ☆205 · Updated 4 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆289 · Updated 8 months ago
- Guaranteed structured output from any language model via hierarchical state machines ☆140 · Updated last month
- Groq Compound Beta MCP server ☆27 · Updated last month
- CLI tool for text-to-image generation using the FLUX.1 model ☆60 · Updated 2 weeks ago
- An open-source deep-research library with reasoning ☆148 · Updated last month
- Letting Claude Code develop its own MCP tools :) ☆114 · Updated 4 months ago
- A wannabe Ollama equivalent for Apple MLX models ☆72 · Updated 4 months ago
- A cross-platform app that unlocks your device's Gen AI capabilities ☆60 · Updated this week
- ☆180 · Updated 4 months ago