smpanaro / coreml-llm-cli
CLI to demonstrate running a large language model (LLM) on Apple Neural Engine.
☆66 · Updated 2 weeks ago
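For context on the theme of this list, here is a minimal, illustrative Swift sketch of loading a compiled Core ML model with a configuration that permits the Apple Neural Engine. This is not code from coreml-llm-cli; the model path is a hypothetical placeholder, and the real CLI's pipeline (tokenization, KV-cache handling, chunked models) is more involved.

```swift
import CoreML
import Foundation

// Illustrative sketch only: load a compiled Core ML model and allow it to
// schedule work on the Apple Neural Engine. The path below is a placeholder.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine  // macOS 13+/iOS 16+; use .all to also allow the GPU

let modelURL = URL(fileURLWithPath: "Model.mlmodelc")  // hypothetical compiled model bundle
do {
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print("Loaded model with inputs: \(Array(model.modelDescription.inputDescriptionsByName.keys))")
} catch {
    print("Failed to load model: \(error)")
}
```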
Related projects
Alternatives and complementary repositories for coreml-llm-cli
- Swift Core ML Examples ☆164 · Updated 2 months ago
- Swift implementation of Flux.1 using mlx-swift ☆63 · Updated 2 weeks ago
- Tool for exporting Apple Neural Engine-accelerated versions of transformers models on HuggingFace Hub. ☆11 · Updated last year
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆53 · Updated last year
- MLX Swift implementation of Andrej Karpathy's Let's build GPT video ☆54 · Updated 7 months ago
- Tool for visually profiling Core ML models, compatible with both package and compiled versions, including reasons for unsupported operation… ☆21 · Updated 5 months ago
- Port of Suno's Bark TTS transformer in Apple's MLX Framework ☆72 · Updated 9 months ago
- Find out why your CoreML model isn't running on the Neural Engine! ☆24 · Updated 5 months ago
- See the device (CPU/GPU/ANE) and estimated cost for every layer in your CoreML model. ☆18 · Updated 5 months ago
- Run embedding models locally in Swift using MLTensor. ☆18 · Updated this week
- Minimal, clean code implementation of RAG with mlx using gguf model weights ☆43 · Updated 6 months ago
- Implementation of F5-TTS in Swift using MLX ☆42 · Updated last month
- ☆51 · Updated last year
- ☆62 · Updated this week
- mlx image models for Apple Silicon machines ☆69 · Updated 6 months ago
- Python tools for WhisperKit: Model conversion, optimization and evaluation ☆171 · Updated 2 weeks ago
- Explore a simple example of utilizing MLX for a RAG application running locally on your Apple Silicon device. ☆145 · Updated 9 months ago
- CLIP-Finder enables semantic offline searches of images from gallery photos using natural language descriptions or the camera. Built on A… ☆61 · Updated 3 months ago
- Fork of Apple's MLX Swift example with the addition of a macOS SwiftUI app ☆51 · Updated 9 months ago
- A multi-platform SwiftUI frontend for running local LLMs with Apple's MLX framework. ☆357 · Updated 3 weeks ago
- Fast parallel LLM inference for MLX ☆149 · Updated 4 months ago
- For running inference and serving local LLMs using the MLX framework ☆89 · Updated 7 months ago
- ☆101 · Updated 3 months ago
- Local ML voice chat using high-end models. ☆146 · Updated this week
- MLX Image Models ☆22 · Updated 8 months ago
- ☆39 · Updated 5 months ago
- A module enabling the integration of Large Language Models (LLMs) with the Spezi Ecosystem ☆148 · Updated this week
- FastMLX is a high-performance, production-ready API to host MLX models. ☆220 · Updated this week
- ☆14 · Updated 6 months ago
- Profile your CoreML models directly from Python 🐍 ☆24 · Updated last month