1.58 Bit LLM on Apple Silicon using MLX
☆255 · May 10, 2024 · Updated last year
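BitNet b1.58 (the technique this repo implements on MLX) quantizes each weight to one of three values, {-1, 0, +1}, using a per-tensor "absmean" scale. A minimal NumPy sketch of that quantization step, under the paper's published formula — the function name is illustrative, and NumPy stands in for MLX so the example is self-contained:

```python
import numpy as np

def absmean_ternary_quantize(W: np.ndarray, eps: float = 1e-6):
    """Quantize a weight tensor to {-1, 0, +1} with a per-tensor absmean scale."""
    # gamma is the mean absolute weight; dividing by it and rounding
    # maps each weight onto the three levels used by BitNet b1.58.
    gamma = np.abs(W).mean() + eps
    W_q = np.clip(np.round(W / gamma), -1, 1).astype(np.int8)
    return W_q, gamma

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)
W_q, gamma = absmean_ternary_quantize(W)
W_hat = gamma * W_q  # dequantized approximation of W
```

Storing `W_q` plus one float scale per tensor is what gives the "1.58 bit" figure: log2(3) ≈ 1.58 bits of information per weight.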
Alternatives and similar repositories for mlx-bitnet
Users interested in mlx-bitnet are comparing it to the libraries listed below.
- Supporting code for "LLMs for your iPhone: Whole-Tensor 4 Bit Quantization" ☆11 · Mar 31, 2024 · Updated last year
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Jun 20, 2025 · Updated 9 months ago
- Experimental BitNet Implementation ☆74 · Nov 27, 2025 · Updated 3 months ago
- A tiny server to run local inference on MLX models in the style of OpenAI ☆13 · Jan 31, 2024 · Updated 2 years ago
- Distributed inference for MLX LLMs ☆100 · Aug 1, 2024 · Updated last year
- REAP expert pruning for MoE LLMs on Apple Silicon via MLX ☆49 · Mar 16, 2026 · Updated last week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆286 · Jun 16, 2025 · Updated 9 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆262 · Oct 25, 2025 · Updated 4 months ago
- Fast parallel LLM inference for MLX ☆249 · Jul 7, 2024 · Updated last year
- ☆10 · Nov 16, 2024 · Updated last year
- MLX Transformers is a library that provides model implementations in MLX. It uses a model interface similar to HuggingFace Transformers an… ☆75 · Nov 19, 2024 · Updated last year
- Import documents for LLMs ☆47 · Jan 19, 2025 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Oct 16, 2023 · Updated 2 years ago
- An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. ☆1,593 · Sep 6, 2024 · Updated last year
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,338 · Updated this week
- 📋 NotebookMLX - An Open Source version of NotebookLM (Ported NotebookLlama) ☆339 · Mar 3, 2025 · Updated last year
- Unofficial implementation of DreamTalk in ComfyUI ☆12 · Aug 15, 2024 · Updated last year
- Implementing the BitNet model in Rust ☆46 · Apr 18, 2024 · Updated last year
- Run frontier AI locally. ☆42,805 · Updated this week
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,904 · Updated this week
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX. ☆460 · Jan 29, 2025 · Updated last year
- Minimal Claude Code alternative powered by MLX ☆46 · Jan 11, 2026 · Updated 2 months ago
- Implementation of the Mamba SSM with hf_integration. ☆55 · Aug 31, 2024 · Updated last year
- Implementation of nougat that focuses on processing PDFs locally. ☆84 · Jan 15, 2025 · Updated last year
- LLM training in simple, raw C/Metal Shading Language ☆61 · Apr 24, 2024 · Updated last year
- Examples in the MLX framework ☆8,375 · Feb 12, 2026 · Updated last month
- A simple LLaMA implementation using MLX. ☆15 · Apr 22, 2024 · Updated last year
- It's a baby compiler. (Lean btw.) ☆16 · May 19, 2025 · Updated 10 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆319 · Oct 30, 2024 · Updated last year
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆317 · Mar 14, 2026 · Updated last week
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆181 · Apr 19, 2024 · Updated last year
- Efficient framework-agnostic data loading ☆463 · Oct 1, 2025 · Updated 5 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆682 · Mar 10, 2026 · Updated 2 weeks ago
- Your gateway to both Ollama & Apple MLX models ☆153 · Mar 2, 2025 · Updated last year
- ☆11 · Jul 17, 2023 · Updated 2 years ago
- ☆17 · May 8, 2024 · Updated last year
- Joint speech-language model - respond directly to audio! ☆373 · Jul 1, 2024 · Updated last year
- Run GreenBitAI's Quantized LLMs on Apple Devices with MLX ☆31 · Aug 27, 2025 · Updated 6 months ago
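Several of the repos above (the Rust and PyTorch BitNet implementations, and the ASIC matrix multiplication unit) exploit the same property: when weights are restricted to {-1, 0, +1}, a matrix-vector product reduces to additions and subtractions with no multiplications. A hedged NumPy sketch of that idea, with illustrative names — not the code any of the listed projects actually use:

```python
import numpy as np

def ternary_matvec(W_q: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product for a {-1, 0, +1} weight matrix using only adds."""
    out = np.zeros(W_q.shape[0], dtype=x.dtype)
    for i, row in enumerate(W_q):
        # +1 entries add the input element, -1 entries subtract it,
        # and 0 entries are skipped entirely -- no multiplies needed.
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

W_q = np.array([[1, 0, -1], [-1, 1, 1]], dtype=np.int8)
x = np.array([0.5, 2.0, -1.0], dtype=np.float32)
y = ternary_matvec(W_q, x)  # matches W_q.astype(np.float32) @ x
```

Hardware implementations go further by packing the ternary weights densely (log2(3) ≈ 1.58 bits each), but the add/subtract structure above is the core of the speedup.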