jbarrow / mlx-playground
MLX implementations of various transformers, speedups, and training
☆33 · Updated last year
Alternatives and similar repositories for mlx-playground
Users interested in mlx-playground are comparing it to the libraries listed below.
- Karpathy's llama2.c transpiled to MLX for Apple Silicon ☆15 · Updated last year
- For inference and serving of local LLMs using the MLX framework ☆107 · Updated last year
- Run embeddings in MLX ☆90 · Updated 10 months ago
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated last year
- ☆115 · Updated 7 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- Implementation of Nougat that focuses on processing PDFs locally ☆81 · Updated 6 months ago
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework ☆111 · Updated last year
- Minimal, clean code implementation of RAG with MLX using GGUF model weights ☆52 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆270 · Updated 10 months ago
- Very basic framework for composable, parameterized (Q)LoRA / (Q)DoRA fine-tuning of large language models using mlx, mlx_lm, and OgbujiPT ☆42 · Updated last month
- Fast parallel LLM inference for MLX ☆203 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆82 · Updated 3 months ago
- ☆157 · Updated last year
- A simple UI / web frontend for MLX mlx-lm using Streamlit ☆258 · Updated last month
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX, using Automator on macOS ☆82 · Updated last year
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device ☆172 · Updated last year
- ☆38 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆88 · Updated last year
- ☆66 · Updated last year
- LLaVA server (llama.cpp) ☆181 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆175 · Updated last year
- A framework for evaluating function calls made by LLMs ☆37 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- Distributed inference for MLX LLMs ☆94 · Updated last year