willccbb / mlx_parallm
Fast parallel LLM inference for MLX
☆173 · Updated 8 months ago
Alternatives and similar repositories for mlx_parallm:
Users interested in mlx_parallm are comparing it to the libraries listed below.
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆268 · Updated this week
- Start a server from the MLX library. ☆179 · Updated 7 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI. ☆224 · Updated 10 months ago
- Phi-3.5 for Mac: locally run Vision and Language Models for Apple Silicon. ☆261 · Updated 6 months ago
- Run embeddings in MLX. ☆82 · Updated 5 months ago
- smol models are fun too. ☆89 · Updated 4 months ago
- look how they massacred my boy. ☆63 · Updated 4 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention. ☆118 · Updated last year
- Infer and serve local LLMs using the MLX framework. ☆96 · Updated 11 months ago
- MLX-Embeddings is a package for running Vision and Language embedding models locally on your Mac using MLX. ☆105 · Updated 4 months ago
- Scripts to create your own MoE models using MLX. ☆89 · Updated last year
- ☆111 · Updated 2 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs". ☆154 · Updated 4 months ago
- ☆126 · Updated 6 months ago
- Distributed inference for MLX LLMs. ☆84 · Updated 7 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens. ☆135 · Updated 3 weeks ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆196 · Updated 7 months ago
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆116 · Updated 2 weeks ago
- This is our own implementation of "Layer-Selective Rank Reduction". ☆233 · Updated 9 months ago
- ☆152 · Updated 7 months ago
- SiLLM simplifies training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆256 · Updated last week
- Tutorial for building an LLM router. ☆186 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆168 · Updated last month
- 1.58-bit LLM on Apple Silicon using MLX. ☆191 · Updated 10 months ago
- smolLM with the Entropix sampler in PyTorch. ☆150 · Updated 4 months ago
- ☆149 · Updated 3 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆162 · Updated last year