ml-explore / mlx-examples
Examples in the MLX framework
☆7,632 · Updated last month
Alternatives and similar repositories for mlx-examples
Users interested in mlx-examples are comparing it to the libraries listed below.
- MLX: An array framework for Apple silicon — ☆21,361 · Updated this week
- An Extensible Deep Learning Library — ☆2,158 · Updated this week
- An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. — ☆1,567 · Updated 10 months ago
- Lightweight, standalone C++ inference engine for Google's Gemma models. — ☆6,491 · Updated last week
- Tensor library for machine learning — ☆12,808 · Updated this week
- Python bindings for llama.cpp — ☆9,313 · Updated this week
- Inference Llama 2 in one file of pure C — ☆18,526 · Updated 11 months ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. — ☆6,011 · Updated 3 months ago
- High-speed Large Language Model Serving for Local Deployment — ☆8,231 · Updated 4 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. — ☆9,731 · Updated last year
- Run PyTorch LLMs locally on servers, desktops, and mobile — ☆3,597 · Updated this week
- ☆8,634 · Updated 9 months ago
- Llama and other large language models on iOS and macOS, offline, using the GGML library. — ☆1,811 · Updated 4 months ago
- On-device Speech Recognition for Apple Silicon — ☆4,794 · Updated 2 weeks ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. — ☆8,631 · Updated last year
- CoreNet: A library for training deep neural networks — ☆7,013 · Updated 2 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. — ☆1,470 · Updated this week
- The official PyTorch implementation of Google's Gemma models — ☆5,496 · Updated last month
- Large Language Model Text Generation Inference — ☆10,311 · Updated this week
- PyTorch-native post-training library — ☆5,306 · Updated this week
- Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama 2, Mistral, or Vicuna using Ollama. — ☆5,449 · Updated 3 months ago
- Run Mixtral-8x7B models in Colab or on consumer desktops — ☆2,310 · Updated last year
- Go ahead and axolotl questions — ☆9,852 · Updated this week
- Official inference library for Mistral models — ☆10,338 · Updated 3 months ago
- LLM training in simple, raw C/CUDA — ☆27,075 · Updated 2 weeks ago
- Run LLMs with MLX — ☆1,276 · Updated this week
- An MLX port of FLUX based on the Hugging Face Diffusers implementation. — ☆1,463 · Updated this week
- ☆3,886 · Updated last year
- On-device AI across mobile, embedded, and edge for PyTorch — ☆3,012 · Updated this week
- Blazingly fast LLM inference. — ☆5,849 · Updated this week