willccbb / mlx_parallm
Fast parallel LLM inference for MLX
☆177 · Updated 8 months ago
Alternatives and similar repositories for mlx_parallm:
Users interested in mlx_parallm are comparing it to the libraries listed below.
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆224 · Updated 11 months ago
- smol models are fun too ☆91 · Updated 4 months ago
- For inferring and serving local LLMs using the MLX framework ☆99 · Updated last year
- ☆111 · Updated 3 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆264 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆81 · Updated 3 weeks ago
- model activation visualiser ☆90 · Updated this week
- Start a server from the MLX library. ☆182 · Updated 8 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆426 · Updated 6 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆168 · Updated 2 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆138 · Updated last month
- ☆126 · Updated 7 months ago
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- look how they massacred my boy ☆63 · Updated 5 months ago
- Distributed inference for MLX LLMs ☆87 · Updated 8 months ago
- run embeddings in MLX ☆84 · Updated 6 months ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆283 · Updated 2 weeks ago
- ☆136 · Updated last year
- ☆66 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆233 · Updated 10 months ago
- smolLM with Entropix sampler on PyTorch ☆151 · Updated 5 months ago
- Solving data for LLMs - Create quality synthetic datasets! ☆145 · Updated 2 months ago
- ☆152 · Updated 8 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆121 · Updated this week
- ☆150 · Updated 4 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆260 · Updated 2 weeks ago
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆163 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆194 · Updated 10 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆195 · Updated 8 months ago