woolfel / ml-macos-performance
☆98, updated 2 years ago
Alternatives and similar repositories for ml-macos-performance
Users interested in ml-macos-performance are comparing it to the repositories listed below.
- ☆32, updated 2 years ago
- TensorFlow Metal Backend on Apple Silicon Experiments, just for fun (☆279, updated 3 years ago)
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA (☆215, updated last month)
- Efficient framework-agnostic data loading (☆459, updated 4 months ago)
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine (☆121, updated last year)
- Large language model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX (☆458, updated last year)
- FlashAttention (Metal port) (☆579, updated last year)
- LM Studio Apple MLX engine (☆883, updated last week)
- Spying on Apple's new predictive text model (☆136, updated 2 years ago)
- Run transformers (incl. LLMs) on the Apple Neural Engine (☆64, updated 2 years ago)
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework (☆284, updated 7 months ago)
- ☆197, updated 10 months ago
- Sudoless Asitop (☆89, updated last year)
- Tool for exporting Apple Neural Engine-accelerated versions of transformers models on the Hugging Face Hub (☆13, updated 2 years ago)
- Your gateway to both Ollama and Apple MLX models (☆150, updated 11 months ago)
- C API for MLX (☆172, updated last week)
- Export Hugging Face models to Core ML and TensorFlow Lite (☆691, updated last year)
- A few quick scripts for testing TensorFlow/PyTorch/Llama 2 on macOS (☆201, updated last year)
- MLX-Embeddings, a package for running vision and language embedding models locally on your Mac using MLX (☆269, updated 3 weeks ago)
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device (☆178, updated 2 years ago)
- Start a server from the MLX library (☆198, updated last year)
- FastMLX, a high-performance, production-ready API for hosting MLX models (☆342, updated 10 months ago)
- The easiest way to run the fastest MLX-based LLMs locally (☆310, updated last year)
- Port of Andrej Karpathy's llm.c to Mojo (☆363, updated 6 months ago)
- 1.58-bit LLM on Apple Silicon using MLX (☆243, updated last year)
- Mac app to demonstrate swift-transformers (☆593, updated last year)
- Print all known information about the GPU on Apple-designed chips (☆95, updated 3 months ago)
- For running inference and serving local LLMs using the MLX framework (☆110, updated last year)
- llama.cpp-based AI chat app for macOS (☆497, updated last year)
- User interface for Ollama.ai built with Swift (☆359, updated 7 months ago)