1.58 Bit LLM on Apple Silicon using MLX
☆274 · May 10, 2024 · Updated last year
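mlx-bitnet targets the 1.58-bit scheme in which every weight is constrained to the ternary set {-1, 0, +1}. As background, the BitNet b1.58 paper quantizes weights with an "absmean" rule: scale by the mean absolute value, then round and clip to the ternary set. A minimal NumPy sketch of that rule (illustrative only, not the mlx-bitnet implementation; the function name and epsilon are assumptions):

```python
import numpy as np

def quantize_weights_ternary(w: np.ndarray, eps: float = 1e-5):
    """Absmean ternary quantization: scale by mean(|w|), then
    round and clip each weight into {-1, 0, +1}."""
    scale = np.abs(w).mean() + eps          # per-tensor absmean scale
    w_q = np.clip(np.round(w / scale), -1, 1)
    return w_q, scale                       # dequantize as w_q * scale

# Example: all quantized values land in the ternary set
w = np.array([[0.9, -0.05, -1.3],
              [0.2,  2.0,  -0.4]])
w_q, scale = quantize_weights_ternary(w)
assert set(np.unique(w_q)) <= {-1.0, 0.0, 1.0}
```

Because every quantized weight is -1, 0, or +1, the matrix multiply in a linear layer reduces to additions and subtractions, which is what the ASIC and Metal projects listed below exploit.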
Alternatives and similar repositories for mlx-bitnet
Users that are interested in mlx-bitnet are comparing it to the libraries listed below.
- Supporting code for "LLMs for your iPhone: Whole-Tensor 4 Bit Quantization" ☆11 · Mar 31, 2024 · Updated 2 years ago
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Jun 20, 2025 · Updated 10 months ago
- Experimental BitNet implementation ☆74 · Nov 27, 2025 · Updated 5 months ago
- A tiny server to run local inference on MLX models in the style of OpenAI ☆13 · Jan 31, 2024 · Updated 2 years ago
- Distributed inference for MLX LLMs ☆101 · Aug 1, 2024 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆286 · Jun 16, 2025 · Updated 10 months ago
- A simple UI / web frontend for MLX mlx-lm using Streamlit. ☆263 · Oct 25, 2025 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆249 · Jul 7, 2024 · Updated last year
- ☆10 · Nov 16, 2024 · Updated last year
- Import documents for LLMs ☆48 · Mar 30, 2026 · Updated last month
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Oct 16, 2023 · Updated 2 years ago
- EXO Gym is an open-source Python toolkit that facilitates distributed AI research. ☆106 · Dec 1, 2025 · Updated 5 months ago
- An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. ☆1,595 · Sep 6, 2024 · Updated last year
- 📋 NotebookMLX - An open-source version of NotebookLM (ported NotebookLlama) ☆344 · Mar 3, 2025 · Updated last year
- Run frontier AI locally. ☆44,293 · Updated this week
- Implementation of F5-TTS in Swift using MLX ☆90 · Dec 11, 2024 · Updated last year
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,924 · Apr 27, 2026 · Updated last week
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆463 · Jan 29, 2025 · Updated last year
- Minimal Claude Code alternative powered by MLX ☆46 · Jan 11, 2026 · Updated 3 months ago
- Implementation of the Mamba SSM with hf_integration. ☆55 · Aug 31, 2024 · Updated last year
- LLM training in simple, raw C / Metal Shading Language ☆63 · Apr 24, 2024 · Updated 2 years ago
- A fast, minimalistic implementation of guided generation on Apple Silicon using Outlines and MLX ☆59 · Feb 9, 2024 · Updated 2 years ago
- Examples in the MLX framework ☆8,545 · Apr 6, 2026 · Updated 3 weeks ago
- It's a baby compiler. (Lean, btw.) ☆16 · May 19, 2025 · Updated 11 months ago
- The easiest way to run the fastest MLX-based LLMs locally ☆323 · Oct 30, 2024 · Updated last year
- FlashAttention (Metal port) ☆599 · Sep 22, 2024 · Updated last year
- Your gateway to both Ollama & Apple MLX models ☆152 · Mar 2, 2025 · Updated last year
- Efficient framework-agnostic data loading ☆473 · Oct 1, 2025 · Updated 7 months ago
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆189 · Apr 19, 2024 · Updated 2 years ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆708 · Mar 10, 2026 · Updated last month
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆360 · Apr 24, 2026 · Updated last week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆4,573 · Updated this week
- ☆11 · Jul 17, 2023 · Updated 2 years ago
- ☆17 · May 8, 2024 · Updated last year
- Joint speech-language model - respond directly to audio! ☆372 · Jul 1, 2024 · Updated last year
- TerDiT: Ternary Diffusion Models with Transformers ☆74 · Jun 17, 2024 · Updated last year
- A reinforcement learning framework based on MLX. ☆254 · Dec 1, 2025 · Updated 5 months ago
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · May 8, 2025 · Updated 11 months ago
- ☆10 · May 6, 2024 · Updated last year