b4rtaz / distributed-llama
Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.
☆2,074 Updated last month
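The speed-up comes from splitting each layer's large weight matrices across the connected devices (tensor-parallel slicing), so every node multiplies only its own slice and stores only its own share of the weights. The sketch below illustrates the idea with plain NumPy; it is not distributed-llama's code, and all names, shapes, and the 4-device split are hypothetical.

```python
# Illustrative sketch only: why splitting weights across devices speeds up
# inference. NOT distributed-llama's code; names, shapes, and the 4-device
# split are hypothetical.
import numpy as np

def split_columns(weight, n_devices):
    """Give each device a contiguous slice of the weight matrix's columns."""
    return np.array_split(weight, n_devices, axis=1)

def sharded_matmul(x, shards):
    """Each device multiplies x by its own shard; the root concatenates the pieces."""
    partials = [x @ shard for shard in shards]  # would run on n_devices nodes in parallel
    return np.concatenate(partials, axis=-1)

# One Llama-style FFN up-projection: 4096 -> 11008.
rng = np.random.default_rng(0)
w = rng.standard_normal((4096, 11008), dtype=np.float32)
x = rng.standard_normal((1, 4096), dtype=np.float32)

shards = split_columns(w, n_devices=4)        # each node holds ~1/4 of the weights
y = sharded_matmul(x, shards)
assert np.allclose(y, x @ w, atol=1e-3)       # same output, ~1/4 the work per node
```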
Alternatives and similar repositories for distributed-llama
Users interested in distributed-llama are comparing it to the libraries listed below.
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,202 Updated this week
- Large-scale LLM inference engine ☆1,440 Updated last week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆614 Updated last week
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆767 Updated last week
- Local AI API Platform ☆2,715 Updated 3 weeks ago
- Llama 2 Everywhere (L2E) ☆1,517 Updated 4 months ago
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆848 Updated last week
- VS Code extension for LLM-assisted code/text completion ☆778 Updated last week
- Blazingly fast LLM inference. ☆5,670 Updated this week
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference ☆737 Updated last month
- Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a RPI Zero 2 (or in 298MB of RAM) but… ☆1,945 Updated 3 weeks ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,878 Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆567 Updated 3 months ago
- Stable Diffusion and Flux in pure C/C++ ☆4,129 Updated 2 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆519 Updated this week
- ☆2,952 Updated 8 months ago
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,170 Updated 7 months ago
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model ☆1,525 Updated 2 months ago
- Python bindings for llama.cpp (a minimal usage sketch follows this list) ☆9,193 Updated 3 weeks ago
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆969 Updated this week
- Distributed LLM and StableDiffusion inference for mobile, desktop and server. ☆2,854 Updated 7 months ago
- Implementation for MatMul-free LM. ☆3,006 Updated 7 months ago
- ☆1,040 Updated 2 weeks ago
- Llama-3 agents that can browse the web by following instructions and talking to you ☆1,405 Updated 5 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,314 Updated this week
- Distributed Training Over-The-Internet ☆935 Updated 3 weeks ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024. ☆1,301 Updated last month
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,441 Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,857 Updated last month
- NVIDIA Linux open GPU with P2P support ☆1,159 Updated last week
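Of the entries above, the Python bindings for llama.cpp are the quickest way to try local inference from a script. A minimal sketch, assuming a GGUF model file has already been downloaded; the path below is a placeholder:

```python
# Minimal llama-cpp-python example (the "Python bindings for llama.cpp" entry above).
# The model path is a placeholder; point it at any local GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Why does sharding a model across devices speed up inference? A:",
          max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

The same `Llama` object also exposes `create_chat_completion()` for OpenAI-style chat messages.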