Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.
☆2,892 · Feb 10, 2026 · Updated 2 months ago
Alternatives and similar repositories for distributed-llama
Users that are interested in distributed-llama are comparing it to the libraries listed below.
- Run frontier AI locally. ☆43,503 · Updated this week
- Distribute and run LLMs with a single file. ☆24,121 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. ☆10,057 · Sep 7, 2024 · Updated last year
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,514 · Apr 3, 2026 · Updated last week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆855 · Apr 3, 2026 · Updated last week
- LLM inference in C/C++. ☆103,237 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,493 · Mar 4, 2026 · Updated last month
- Distributed inference for mobile, desktop and server. ☆3,010 · Apr 5, 2026 · Updated last week
- High-speed Large Language Model Serving for Local Deployment. ☆9,275 · Jan 24, 2026 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆76,536 · Updated this week
- Universal LLM Deployment Engine with ML Compilation. ☆22,414 · Apr 6, 2026 · Updated last week
- 💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows. ☆12,395 · Updated this week
- Fast, flexible LLM inference. ☆6,928 · Updated this week
- ☆135 · Updated this week
- Unsloth Studio is a web UI for training and running open models like Gemma 4, Qwen3.5, DeepSeek, gpt-oss locally. ☆61,312 · Updated this week
- Large-scale LLM inference engine. ☆1,686 · Mar 12, 2026 · Updated last month
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆42,652 · Updated this week
- Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time. ☆21,988 · Apr 8, 2026 · Updated last week
- LocalAI is the open-source AI engine. Run any model (LLMs, vision, voice, image, video) on any hardware. No GPU required. ☆45,386 · Updated this week
- Inference of Mamba, Mamba2 and Mamba3 models in pure C. ☆199 · Mar 18, 2026 · Updated 3 weeks ago
- High-performance in-browser LLM inference engine. ☆17,740 · Apr 8, 2026 · Updated last week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆626 · Mar 9, 2026 · Updated last month
- Tensor library for machine learning. ☆14,394 · Updated this week
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++. ☆5,726 · Updated this week
- WebAssembly binding for llama.cpp, enabling on-browser LLM inference. ☆1,029 · Dec 17, 2025 · Updated 3 months ago
- Python bindings for llama.cpp. ☆10,181 · Updated this week
- Vane is an AI-powered answering engine. ☆33,727 · Updated this week
- Go ahead and axolotl questions. ☆11,608 · Apr 8, 2026 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,297 · Updated this week
- Port of OpenAI's Whisper model in C/C++. ☆48,405 · Mar 29, 2026 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆6,973 · Mar 15, 2026 · Updated 3 weeks ago
- Your agent in your terminal, equipped with local tools: writes code, uses the terminal, browses the web. Make your own persistent autonom… ☆4,268 · Updated this week
- Inference Llama 2 in one file of pure C. ☆19,379 · Aug 6, 2024 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization. ☆352 · Aug 30, 2024 · Updated last year
- SoTA production-ready AI retrieval system. Agentic Retrieval-Augmented Generation (RAG) with a RESTful API. ☆7,764 · Nov 7, 2025 · Updated 5 months ago
- The original local LLM interface. Text, vision, tool-calling, training. UI + API, 100% offline and private. ☆46,493 · Updated this week
- Run GGUF models easily with a KoboldAI UI. One file. Zero install. ☆10,074 · Updated this week
- An open-source RAG-based tool for chatting with your documents. ☆25,260 · Apr 3, 2026 · Updated last week
- aider is AI pair programming in your terminal. ☆43,145 · Updated this week