mrdbourke / mac-ml-speed-test
A few quick scripts focused on testing TensorFlow/PyTorch/Llama 2 on macOS.
★185 · Updated 8 months ago
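For context, the speed tests in this repo revolve around running standard workloads on Apple Silicon's GPU through PyTorch's MPS backend. Below is a minimal sketch of that kind of check, assuming a recent PyTorch build with MPS support; the matrix size and timing loop are illustrative only, not the repo's actual benchmark scripts.

```python
import time
import torch

# Use the MPS (Metal) device if this PyTorch build supports it, otherwise fall back to CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
print(f"Using device: {device}")

# Illustrative workload: time a batch of large matrix multiplications.
x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
for _ in range(10):
    z = x @ y
if device.type == "mps":
    torch.mps.synchronize()  # wait for queued GPU work before stopping the clock
elapsed = time.perf_counter() - start
print(f"10 matmuls of 4096x4096: {elapsed:.3f} s")
```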
Alternatives and similar repositories for mac-ml-speed-test:
Users who are interested in mac-ml-speed-test are comparing it to the libraries listed below.
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. (★149 · Updated 2 months ago)
- Set up PyTorch on Mac/Apple Silicon plus a few benchmarks. (★424 · Updated last year)
- Apple MLX engine for LM Studio (★348 · Updated this week)
- FastMLX is a high-performance, production-ready API to host MLX models. (★256 · Updated 2 months ago)
- Graph Neural Network library made for Apple Silicon (★179 · Updated 3 months ago)
- The easiest way to run the fastest MLX-based LLMs locally (★240 · Updated 3 months ago)
- ★315 · Updated 3 months ago
- TensorFlow Metal Backend on Apple Silicon Experiments (just for fun) (★277 · Updated 2 years ago)
- A bunch of experiments using Large Language Models (★188 · Updated 8 months ago)
- Your gateway to both Ollama & Apple MLX models (★78 · Updated this week)
- ★125 · Updated 3 weeks ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. (★154 · Updated 11 months ago)
- ★92 · Updated last year
- Mac-compatible Ollama Voice (★454 · Updated 10 months ago)
- Hugging Face chat-ui integration with the mlx-lm server (★60 · Updated 11 months ago)
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… (★241 · Updated 2 weeks ago)
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. (★246 · Updated last week)
- ★75 · Updated 4 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. (★396 · Updated 2 weeks ago)
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon (★259 · Updated 4 months ago)
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. (★745 · Updated this week)
- Inference code for LLaMA models on CPU and Mac M1/M2 GPU (★78 · Updated last year)
- ★18 · Updated last year
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. (★77 · Updated 11 months ago)
- Efficient framework-agnostic data loading (★392 · Updated this week)
- Created and enhanced a local LLM training system on Apple Silicon with MLX and the Metal API, overcoming the absence of CUDA support. Fine-tu… (★17 · Updated 8 months ago)
- Self-paced bootcamp on Generative AI. Tutorials on ML fundamentals, LLMs, RAGs, LangChain, LangGraph, fine-tuning Llama 3 & AI agents (Cr… (★380 · Updated last week)
- ★19 · Updated 10 months ago
- ★48 · Updated 8 months ago
- ★172 · Updated 5 months ago