exo-explore / mlx-bitnet
1.58-bit LLM on Apple Silicon using MLX
☆191 · Updated 10 months ago
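The repository's name refers to ternary weights, which carry log2(3) ≈ 1.58 bits of information each. As a minimal sketch (not the repo's actual code), the absmean quantization described in "The Era of 1-bit LLMs" maps each weight to {-1, 0, 1} by scaling with the mean absolute weight, then rounding and clipping:

```python
def absmean_ternary_quantize(weights, eps=1e-6):
    """Quantize a weight matrix to ternary values {-1, 0, 1}.

    Sketch of the absmean scheme from "The Era of 1-bit LLMs":
    scale by the mean absolute weight, then round and clip to [-1, 1].
    """
    flat = [abs(w) for row in weights for w in row]
    gamma = sum(flat) / len(flat) + eps  # per-tensor absmean scale
    quantized = [[max(-1, min(1, round(w / gamma))) for w in row]
                 for row in weights]
    return quantized, gamma  # dequantize approximately as q * gamma

W = [[0.4, -1.2, 0.05],
     [2.0, -0.3, 0.9]]
W_q, gamma = absmean_ternary_quantize(W)
# W_q contains only -1, 0, and 1
```

This is plain Python for illustration; an MLX implementation would do the same arithmetic on `mlx.core` arrays so it runs on the Apple Silicon GPU.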
Alternatives and similar repositories for mlx-bitnet:
Users interested in mlx-bitnet are comparing it to the libraries listed below.
- Fast parallel LLM inference for MLX ☆173 · Updated 8 months ago
- Distributed inference for MLX LLMs ☆84 · Updated 7 months ago
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆105 · Updated 4 months ago
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆105 · Updated last year
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆256 · Updated this week
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆261 · Updated 6 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 4 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆268 · Updated this week
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆75 · Updated 7 months ago
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆162 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆268 · Updated last week
- For inferring and serving local LLMs using the MLX framework ☆96 · Updated 11 months ago
- ☆144 · Updated 2 months ago
- Port of Suno's Bark TTS transformer to Apple's MLX framework ☆75 · Updated last year
- Start a server from the MLX library. ☆179 · Updated 7 months ago
- ☆200 · Updated last month
- ☆152 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆196 · Updated 7 months ago
- ☆111 · Updated 2 months ago
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆37 · Updated 2 weeks ago
- Automatically quantize GGUF models ☆160 · Updated this week
- ☆22 · Updated 5 months ago
- An implementation of bucketMul LLM inference ☆215 · Updated 8 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆73 · Updated 3 months ago
- A collection of optimizers for MLX ☆32 · Updated last week