willccbb / mlx_parallm
Fast parallel LLM inference for MLX
☆184 · Updated 9 months ago
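Since the listing itself gives no usage example, here is a minimal sketch of batched generation with mlx_parallm, based on the `load` / `batch_generate` helpers described in the project README. The exact module path, argument names, and defaults are assumptions and may differ across versions; treat this as illustrative rather than canonical.

```python
# Minimal sketch: parallel (batched) generation with mlx_parallm.
# `load` and `batch_generate` are assumed from the project README;
# argument names and defaults may differ in the current API.
from mlx_parallm.utils import load, batch_generate

# Any MLX-compatible Hugging Face model id; this one is just an example.
model, tokenizer = load("google/gemma-1.1-2b-it")

prompts = [
    "Explain KV caching in one sentence.",
    "Summarize the benefits of batched decoding.",
    "What is Apple MLX?",
]

# Generate responses for all prompts in a single batched pass on Apple Silicon.
responses = batch_generate(
    model,
    tokenizer,
    prompts=prompts,      # list of raw prompt strings
    max_tokens=100,       # per-response generation budget
    temp=0.0,             # greedy decoding
    format_prompts=True,  # apply the tokenizer's chat template (assumed flag)
    verbose_time=True,    # print throughput stats (assumed flag)
)

for prompt, response in zip(prompts, responses):
    print(f"{prompt}\n-> {response}\n")
```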
Alternatives and similar repositories for mlx_parallm:
Users interested in mlx_parallm are comparing it to the libraries listed below:
- FastMLX is a high-performance, production-ready API to host MLX models. ☆293 · Updated last month
- Distributed inference for MLX LLMs ☆87 · Updated 8 months ago
- Run embeddings in MLX ☆86 · Updated 6 months ago
- ☆112 · Updated 4 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆265 · Updated 7 months ago
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- For inference and serving of local LLMs using the MLX framework ☆101 · Updated last year
- Start a server from the MLX library. ☆183 · Updated 9 months ago
- look how they massacred my boy ☆63 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆88 · Updated last month
- ☆66 · Updated 11 months ago
- ☆153 · Updated 9 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆223 · Updated 11 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 8 months ago
- ☆129 · Updated 8 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆137 · Updated this week
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆122 · Updated 2 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆249 · Updated 2 months ago
- SmolLM with the Entropix sampler in PyTorch ☆151 · Updated 5 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆262 · Updated 2 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 2 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆171 · Updated 3 months ago
- ☆150 · Updated 4 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆235 · Updated 11 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆64 · Updated 5 months ago
- smol models are fun too ☆92 · Updated 5 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 5 months ago
- Low-rank adapter extraction for fine-tuned transformers models ☆171 · Updated 11 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆438 · Updated 2 months ago