ml-explore / mlx
MLX: An array framework for Apple silicon
☆23,707 · Updated this week
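For context on the repository listed above, here is a minimal, hedged sketch of the kind of lazily evaluated array code MLX is built around. It assumes the `mlx` Python package is installed (`pip install mlx`) on an Apple silicon Mac and uses only `mlx.core` calls (`mx.array`, `mx.ones`, `mx.eval`); the specific values are illustrative only.

```python
# Minimal MLX sketch (assumes `pip install mlx` on Apple silicon).
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])   # arrays live in unified memory shared by CPU and GPU
b = mx.ones((3,))
c = a * b + 2.0                 # operations build a lazy graph; nothing is computed yet
mx.eval(c)                      # force evaluation of the graph
print(c)                        # a length-3 array with values [3, 4, 5]
```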
Alternatives and similar repositories for mlx
Users interested in mlx are comparing it to the libraries listed below:
- Examples in the MLX framework · ☆8,189 · Updated this week
- LLM inference in C/C++ · ☆94,330 · Updated this week
- ☆8,672 · Updated last year
- CoreNet: A library for training deep neural networks · ☆7,018 · Updated 3 months ago
- Tensor library for machine learning · ☆13,907 · Updated this week
- Run LLMs with MLX · ☆3,492 · Updated this week
- Inference Llama 2 in one file of pure C · ☆19,146 · Updated last year
- Universal LLM Deployment Engine with ML Compilation · ☆21,981 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆69,007 · Updated this week
- An Extensible Deep Learning Library · ☆2,317 · Updated this week
- llama3 implementation one matrix multiplication at a time · ☆15,240 · Updated last year
- Open-source search and retrieval database for AI applications · ☆25,956 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆10,830 · Updated last year
- LLM training in simple, raw C/CUDA · ☆28,763 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs · ☆52,437 · Updated 2 months ago
- tiktoken is a fast BPE tokeniser for use with OpenAI's models · ☆17,127 · Updated 3 months ago
- Python bindings for llama.cpp · ☆9,931 · Updated 5 months ago
- Official inference library for Mistral models · ☆10,653 · Updated 2 months ago
- PyTorch native post-training library · ☆5,660 · Updated this week
- Perf monitoring CLI tool for Apple Silicon · ☆4,423 · Updated last year
- Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE) · ☆2,672 · Updated 2 years ago
- Stable Diffusion with Core ML on Apple Silicon · ☆17,790 · Updated 7 months ago
- Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud · ☆12,080 · Updated last week
- Port of OpenAI's Whisper model in C/C++ · ☆46,315 · Updated this week
- Development repository for the Triton language and compiler · ☆18,319 · Updated this week
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… · ☆9,418 · Updated this week
- Distribute and run LLMs with a single file · ☆23,688 · Updated this week
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… · ☆34,881 · Updated this week
- Run frontier AI locally · ☆40,998 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization · ☆10,293 · Updated last year