ml-explore / mlx
MLX: An array framework for Apple silicon
☆23,409 · Updated this week
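MLX exposes a NumPy-like Python API with lazy evaluation and unified memory on Apple silicon. A minimal sketch of that style is shown below; it assumes the `mlx` Python package is installed (e.g. `pip install mlx`) and only touches the core ops `mx.random.normal`, `mx.matmul`, and `mx.eval`.

```python
# Minimal sketch of MLX's lazily evaluated, NumPy-like API (assumes `pip install mlx`).
import mlx.core as mx

# Arrays live in unified memory, shared between CPU and GPU.
a = mx.random.normal((4, 8))
b = mx.random.normal((8, 2))

# Ops build a computation graph lazily; nothing is computed yet.
c = mx.matmul(a, b)

# mx.eval forces the graph to materialize the result.
mx.eval(c)
print(c.shape)  # (4, 2)
```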
Alternatives and similar repositories for mlx
Users interested in mlx are comparing it to the libraries listed below.
- Examples in the MLX framework · ☆8,120 · Updated last month
- Tensor library for machine learning · ☆13,812 · Updated this week
- Distribute and run LLMs with a single file. · ☆23,611 · Updated this week
- LLM inference in C/C++ · ☆92,930 · Updated this week
- Universal LLM Deployment Engine with ML Compilation · ☆21,859 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆67,159 · Updated this week
- An Extensible Deep Learning Library · ☆2,311 · Updated this week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. · ☆50,491 · Updated this week
- High-speed Large Language Model Serving for Local Deployment · ☆8,548 · Updated 5 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. · ☆13,087 · Updated last week
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… · ☆9,215 · Updated this week
- Python bindings for llama.cpp · ☆9,889 · Updated 5 months ago
- Development repository for the Triton language and compiler · ☆18,098 · Updated this week
- Stable Diffusion with Core ML on Apple Silicon · ☆17,778 · Updated 6 months ago
- CoreNet: A library for training deep neural networks · ☆7,022 · Updated 3 months ago
- Large Language Model Text Generation Inference · ☆10,728 · Updated last week
- Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE) · ☆2,672 · Updated 2 years ago
- PyTorch native post-training library · ☆5,642 · Updated this week
- Inference Llama 2 in one file of pure C · ☆19,106 · Updated last year
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. · ☆6,175 · Updated 4 months ago
- Modeling, training, eval, and inference code for OLMo · ☆6,280 · Updated last month
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… · ☆18,141 · Updated 2 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. · ☆24,274 · Updated last year
- Official inference library for Mistral models · ☆10,619 · Updated last month
- Run LLMs with MLX · ☆3,271 · Updated this week
- Go ahead and axolotl questions · ☆11,050 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. · ☆20,424 · Updated last week
- Fast and memory-efficient exact attention · ☆21,516 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. · ☆10,261 · Updated last year