kspviswa / pyOllaMx
Your gateway to both Ollama & Apple MLX models
☆150 · Mar 2, 2025 · Updated 11 months ago
Alternatives and similar repositories for pyOllaMx
Users interested in pyOllaMx are comparing it to the repositories listed below.
- A wannabe Ollama equivalent for Apple MLX models ☆83 · Mar 2, 2025 · Updated 11 months ago
- MLX Transformers is a library that provides model implementation in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆73 · Nov 19, 2024 · Updated last year
- 🧠 Retrieval Augmented Generation (RAG) example ☆19 · Aug 18, 2025 · Updated 5 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆313 · Oct 30, 2024 · Updated last year
- ☆197 · Mar 17, 2025 · Updated 10 months ago
- Gradio chat interface for FastMLX ☆12 · Sep 22, 2024 · Updated last year
- ☆15 · May 17, 2024 · Updated last year
- A tiny server to run local inference on MLX models in the style of OpenAI ☆13 · Jan 31, 2024 · Updated 2 years ago
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)Dora fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Jun 20, 2025 · Updated 7 months ago
- This is a FastAPI based LLM server. Load multiple LLM models (MLX or llama.cpp) simultaneously using multiprocessing. ☆16 · Feb 5, 2026 · Updated last week
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆260 · Oct 25, 2025 · Updated 3 months ago
- Distributed inference for MLX LLMs ☆100 · Aug 1, 2024 · Updated last year
- A simple script to enhance text editing across your Mac, leveraging the power of MLX. Designed for seamless integration, it offers real-t… ☆109 · Mar 4, 2024 · Updated last year
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · Updated 9 months ago
- For inferring and serving local LLMs using the MLX framework ☆110 · Mar 24, 2024 · Updated last year
- FastMLX is a high-performance, production-ready API to host MLX models. ☆342 · Mar 18, 2025 · Updated 10 months ago
- A multi-platform SwiftUI frontend for running local LLMs with Apple's MLX framework. ☆430 · Oct 27, 2024 · Updated last year
- An iOS app written in Swift that tracks sleep habits, built to learn the HealthKit API. ☆14 · Nov 21, 2014 · Updated 11 years ago
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs. ☆97 · Feb 5, 2024 · Updated 2 years ago
- the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly ☆32 · Oct 19, 2024 · Updated last year
- ☆25 · Jan 21, 2026 · Updated 3 weeks ago
- Simple Implementation of a Transformer in the new framework MLX by Apple ☆19 · Nov 18, 2024 · Updated last year
- A fast minimalistic implementation of guided generation on Apple Silicon using Outlines and MLX ☆59 · Feb 9, 2024 · Updated 2 years ago
- ☆20 · Oct 8, 2024 · Updated last year
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆132 · Dec 8, 2025 · Updated 2 months ago
- OllamaLab: a fully fledged AI assistant utilizing Ollama with Companion Mode for macOS ☆34 · Sep 11, 2024 · Updated last year
- Graph Neural Network library made for Apple Silicon ☆207 · Aug 21, 2025 · Updated 5 months ago
- A CLI in Rust to generate synthetic data for MLX-friendly training ☆25 · Jan 13, 2024 · Updated 2 years ago
- Shared personal notes created while working with the Apple MLX machine learning framework ☆24 · Dec 12, 2025 · Updated 2 months ago
- An open source deep research clone. AI agent (local LLM or Gemini) that reasons over large amounts of web data extracted with SwiftSoup. ☆13 · Feb 10, 2025 · Updated last year
- ☆38 · Mar 12, 2024 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Feb 26, 2024 · Updated last year
- A little file for doing LLM-assisted prompt expansion and image generation using Flux.schnell - complete with prompt history, prompt queu… ☆26 · Aug 16, 2024 · Updated last year
- MLX-GUI: MLX inference server for Apple Silicon ☆184 · Jan 13, 2026 · Updated last month
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆662 · Dec 21, 2025 · Updated last month
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,135 · Updated this week
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆178 · Mar 8, 2024 · Updated last year
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆172 · Jan 14, 2024 · Updated 2 years ago
- Electron-based desktop applications for various AI chat platforms. ☆23 · Jun 30, 2025 · Updated 7 months ago