exo-explore / mlx-bitnet
1.58 Bit LLM on Apple Silicon using MLX
☆184 · Updated 9 months ago
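For context, the "1.58 bit" in the name refers to ternary weights: the BitNet b1.58 method from "The Era of 1-bit LLMs" constrains each weight to {-1, 0, +1} (log2(3) ≈ 1.58 bits) via absmean scaling. Below is a minimal sketch of that quantization step in MLX; the `absmean_quantize` helper is illustrative only, not mlx-bitnet's actual API.

```python
# A minimal sketch of BitNet b1.58 absmean weight quantization
# (per "The Era of 1-bit LLMs"); this helper is illustrative,
# not part of mlx-bitnet's API.
import mlx.core as mx

def absmean_quantize(w: mx.array, eps: float = 1e-5):
    """Quantize a weight matrix to ternary values {-1, 0, +1}.

    Returns the ternary weights plus the per-tensor scale, so that
    w is approximated by scale * w_q.
    """
    scale = mx.mean(mx.abs(w)) + eps           # absmean scaling factor
    w_q = mx.clip(mx.round(w / scale), -1, 1)  # round, then clip to ternary
    return w_q, scale

# Example: quantize a random 4x4 weight matrix.
w = mx.random.normal((4, 4))
w_q, scale = absmean_quantize(w)
print(w_q)           # entries are all in {-1.0, 0.0, 1.0}
print(scale.item())  # scale for approximate reconstruction
```

With ternary weights, matrix multiplication reduces mostly to additions and sign flips, which is what makes this scheme attractive for on-device inference on Apple Silicon.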
Alternatives and similar repositories for mlx-bitnet:
Users interested in mlx-bitnet are comparing it to the repositories listed below.
- Fast parallel LLM inference for MLX ☆163 · Updated 7 months ago
- Distributed inference for MLX LLMs ☆82 · Updated 6 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆260 · Updated 5 months ago
- 1.58-bit LLaMa model ☆82 · Updated 10 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆252 · Updated last week
- MLX-Embeddings is a package for running Vision and Language Embedding models locally on your Mac using MLX. ☆96 · Updated 4 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆260 · Updated 2 months ago
- Scripts to create your own MoE models using MLX ☆86 · Updated 11 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆191 · Updated 7 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 3 months ago
- Inference of Mamba models in pure C ☆183 · Updated 11 months ago
- ☆111 · Updated 2 months ago
- ☆152 · Updated 7 months ago
- Train your own small BitNet model ☆64 · Updated 4 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- ☆123 · Updated 6 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆155 · Updated this week
- Start a server from the MLX library. ☆173 · Updated 6 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆242 · Updated 3 weeks ago
- ☆136 · Updated last month
- Testing LLM reasoning abilities with family relationship quizzes. ☆57 · Updated 3 weeks ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- Run embeddings in MLX ☆82 · Updated 4 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. ☆255 · Updated this week
- For running inference and serving local LLMs using the MLX framework ☆94 · Updated 10 months ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆72 · Updated 7 months ago
- Video+code lecture on building nanoGPT from scratch ☆65 · Updated 8 months ago
- Automatically quantize GGUF models ☆154 · Updated this week