woolfel / ml-macos-performance
☆94 · Updated 2 years ago
Alternatives and similar repositories for ml-macos-performance
Users interested in ml-macos-performance are comparing it to the libraries listed below.
- ☆33 · Updated 2 years ago
- TensorFlow Metal Backend on Apple Silicon Experiments (just for fun) ☆279 · Updated 3 years ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. ☆196 · Updated 2 months ago
- A collection of ML scripts to test the M1 Pro MacBook Pro ☆171 · Updated 2 years ago
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆116 · Updated 8 months ago
- Efficient framework-agnostic data loading ☆436 · Updated 2 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆454 · Updated 7 months ago
- Sudoless Asitop ☆81 · Updated last year
- Export Hugging Face models to Core ML and TensorFlow Lite ☆672 · Updated last year
- C API for MLX ☆125 · Updated last month
- Spying on Apple’s new predictive text model ☆136 · Updated last year
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 2 months ago
- Start a server from the MLX library. ☆191 · Updated last year
- LM Studio Apple MLX engine ☆756 · Updated this week
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆174 · Updated last year
- ☆186 · Updated 5 months ago
- Your gateway to both Ollama & Apple MLX models ☆143 · Updated 6 months ago
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆62 · Updated last year
- On-device Image Generation for Apple Silicon ☆649 · Updated 4 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆325 · Updated 5 months ago
- Mac app to demonstrate swift-transformers ☆570 · Updated last year
- A simple web UI/frontend for MLX mlx-lm using Streamlit. ☆261 · Updated 2 months ago
- A wannabe Ollama equivalent for Apple MLX models ☆79 · Updated 6 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆297 · Updated 10 months ago
- llama.cpp-based AI chat app for macOS ☆497 · Updated 9 months ago
- Tool for exporting Apple Neural Engine-accelerated versions of transformers models on the Hugging Face Hub. ☆13 · Updated 2 years ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆193 · Updated 2 weeks ago
- For inference and serving of local LLMs using the MLX framework ☆109 · Updated last year
- User interface for Ollama.ai built with Swift ☆352 · Updated last month
- LLM training in simple, raw C/Metal Shading Language ☆55 · Updated last year