TristanBilot / mlx-benchmark
Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA.
☆176 · Updated 3 weeks ago
Alternatives and similar repositories for mlx-benchmark:
Users interested in mlx-benchmark are comparing it to the libraries listed below.
- Efficient framework-agnostic data loading ☆419 · Updated this week
- Start a server from the MLX library. ☆185 · Updated 9 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆147 · Updated 2 weeks ago
- ☆168 · Updated last month
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆105 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆204 · Updated 11 months ago
- Fast parallel LLM inference for MLX ☆186 · Updated 10 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real-time with Apple MLX. ☆442 · Updated 3 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆297 · Updated last month
- Explore a simple example of utilizing MLX for a RAG application running locally on your Apple Silicon device. ☆168 · Updated last year
- Run embeddings in MLX ☆87 · Updated 7 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆264 · Updated this week
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆65 · Updated 5 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆252 · Updated 3 months ago
- Phi-3.5 for Mac: locally-run Vision and Language Models for Apple Silicon ☆265 · Updated 8 months ago
- FlashAttention (Metal port) ☆483 · Updated 7 months ago
- Graph Neural Network library made for Apple Silicon ☆189 · Updated 7 months ago
- C API for MLX ☆107 · Updated last week
- Distributed inference for MLX LLMs ☆89 · Updated 9 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆350 · Updated 3 weeks ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆79 · Updated 9 months ago
- A reinforcement learning framework based on MLX. ☆233 · Updated 2 months ago
- For inferring and serving local LLMs using the MLX framework ☆103 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆279 · Updated 6 months ago
- Apple MLX engine for LM Studio ☆535 · Updated last week
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆73 · Updated this week
- Python tools for WhisperKit: model conversion, optimization, and evaluation ☆212 · Updated this week
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆101 · Updated 4 months ago
- Chat with MLX is a high-performance macOS application that connects your local documents to a personalized large language model (LLM). ☆174 · Updated last year