woolfel / ml-macos-performance
☆94 · Updated 2 years ago
Alternatives and similar repositories for ml-macos-performance
Users interested in ml-macos-performance are comparing it to the repositories listed below.
- ☆33 · Updated 2 years ago
- TensorFlow Metal backend on Apple Silicon experiments (just for fun) ☆279 · Updated 3 years ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. ☆196 · Updated 3 months ago
- Efficient framework-agnostic data loading ☆437 · Updated this week
- Large language model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆454 · Updated 7 months ago
- C API for MLX ☆132 · Updated 2 weeks ago
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆116 · Updated 8 months ago
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆280 · Updated 3 months ago
- ☆185 · Updated 6 months ago
- Sudoless asitop ☆82 · Updated last year
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆63 · Updated last year
- FlashAttention (Metal port) ☆534 · Updated last year
- Your gateway to both Ollama & Apple MLX models ☆144 · Updated 6 months ago
- Tool for exporting Apple Neural Engine-accelerated versions of transformer models on the Hugging Face Hub. ☆13 · Updated 2 years ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆175 · Updated last year
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆331 · Updated 6 months ago
- Start a server from the MLX library. ☆192 · Updated last year
- MLX image models for Apple Silicon machines ☆84 · Updated 5 months ago
- Spying on Apple’s new predictive text model ☆136 · Updated last year
- A wannabe Ollama equivalent for Apple MLX models ☆80 · Updated 6 months ago
- LM Studio Apple MLX engine ☆786 · Updated this week
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆209 · Updated 2 weeks ago
- llama.cpp-based AI chat app for macOS ☆497 · Updated 10 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆223 · Updated last year
- For running inference and serving local LLMs using the MLX framework ☆109 · Updated last year
- Generate train.jsonl and valid.jsonl files for fine-tuning Mistral and other LLMs. ☆97 · Updated last year
- The easiest way to run the fastest MLX-based LLMs locally ☆299 · Updated 10 months ago
- Power usage monitor for Apple Silicon ☆182 · Updated 4 months ago
- A few quick scripts for testing TensorFlow/PyTorch/Llama 2 on macOS. ☆197 · Updated last year
- ☆52 · Updated 4 months ago