ml-explore / mlx
MLX: An array framework for Apple silicon
☆23,812 · Updated this week
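For orientation before the comparison list, here is a minimal sketch of what MLX's Python array API looks like (lazy evaluation on Apple silicon's unified memory). This is an illustration under the assumption that the `mlx` package is installed (e.g. `pip install mlx`), not an excerpt from the repository.

```python
import mlx.core as mx

# Arrays live in unified memory, shared between CPU and GPU.
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones((3,))

# Operations build a graph lazily; nothing is computed yet.
c = mx.exp(a) + b

# mx.eval materializes the result.
mx.eval(c)
print(c)
```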
Alternatives and similar repositories for mlx
Users interested in mlx are comparing it to the libraries listed below.
- Examples in the MLX framework ☆8,216 · Updated this week
- ☆8,672 · Updated last year
- Tensor library for machine learning ☆13,907 · Updated last week
- Inference Llama 2 in one file of pure C ☆19,146 · Updated last year
- Run LLMs with MLX ☆3,492 · Updated last week
- CoreNet: A library for training deep neural networks ☆7,016 · Updated 3 months ago
- High-speed Large Language Model Serving for Local Deployment ☆8,635 · Updated 2 weeks ago
- An Extensible Deep Learning Library ☆2,317 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,137 · Updated this week
- LLM training in simple, raw C/CUDA ☆28,763 · Updated 7 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,293 · Updated last year
- LLM inference in C/C++ ☆94,330 · Updated this week
- Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE) ☆2,672 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,886 · Updated last year
- Perf monitoring CLI tool for Apple Silicon ☆4,423 · Updated last year
- Python bindings for llama.cpp ☆9,958 · Updated 5 months ago
- ☆3,887 · Updated last year
- On-device Speech Recognition for Apple Silicon ☆5,574 · Updated last week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,721 · Updated last week
- PyTorch native post-training library ☆5,660 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆21,981 · Updated last week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆51,625 · Updated this week
- Go ahead and axolotl questions ☆11,251 · Updated this week
- llama3 implementation one matrix multiplication at a time ☆15,241 · Updated last year
- Official inference library for Mistral models ☆10,653 · Updated 2 months ago
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… ☆9,418 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆69,622 · Updated this week
- Run frontier AI locally. ☆40,998 · Updated this week
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,180 · Updated 5 months ago
- You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆31,291 · Updated this week