Distributed LLM inference. Connect home devices into a powerful cluster to accelerate inference: the more devices you add, the faster it runs.
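The speedup from adding devices comes from splitting each layer's work across the cluster. As a rough illustration (not distributed-llama's actual code; names and structure here are invented for the sketch), tensor parallelism shards a layer's weight matrix by rows so each device computes only its slice of every matrix-vector product:

```python
# Illustrative sketch only: row-sharded matrix-vector product, the core
# idea behind tensor-parallel LLM inference. In a real cluster each
# shard lives on a different machine and the shard computations run
# concurrently; here they run sequentially in one process.

def matvec(rows, x):
    """Dense matrix-vector product over plain Python lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in rows]

def split_rows(matrix, n_workers):
    """Partition matrix rows into n_workers contiguous shards."""
    k, r = divmod(len(matrix), n_workers)
    shards, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)
        shards.append(matrix[start:end])
        start = end
    return shards

def parallel_matvec(matrix, x, n_workers):
    """Each 'device' computes its shard; outputs are concatenated.
    Only x and the partial outputs would cross the network."""
    out = []
    for shard in split_rows(matrix, n_workers):
        out.extend(matvec(shard, x))  # one device's share of the work
    return out

W = [[1, 2], [3, 4], [5, 6]]
x = [1, 1]
assert parallel_matvec(W, x, 2) == matvec(W, x)  # same result, work split
```

With the matmul cost divided across shards, per-device compute drops roughly linearly with cluster size, at the price of synchronizing activations over the network each layer.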
☆2,842 · Updated 2 weeks ago (Feb 10, 2026)
Alternatives and similar repositories for distributed-llama
Users interested in distributed-llama are comparing it to the libraries listed below.
- Distribute and run LLMs with a single file. ☆23,755 · Updated this week
- Run frontier AI locally. ☆41,955 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. ☆9,971 · Updated last year (Sep 7, 2024)
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,467 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,444 · Updated 2 months ago (Dec 9, 2025)
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆821 · Updated last week (Feb 23, 2026)
- High-speed Large Language Model Serving for Local Deployment. ☆8,729 · Updated last month (Jan 24, 2026)
- Distributed LLM and StableDiffusion inference for mobile, desktop and server. ☆2,905 · Updated last year (Oct 23, 2024)
- LLM inference in C/C++. ☆96,322 · Updated this week
- Universal LLM Deployment Engine with ML Compilation. ☆22,082 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆71,234 · Updated this week
- 💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows. ☆12,210 · Updated last week (Feb 22, 2026)
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆52,724 · Updated this week
- Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time. ☆21,340 · Updated this week
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆37,083 · Updated this week
- Large-scale LLM inference engine. ☆1,658 · Updated last week (Feb 17, 2026)
- Fast, flexible LLM inference. ☆6,623 · Updated this week
- High-performance in-browser LLM inference engine. ☆17,456 · Updated last week (Feb 18, 2026)
- The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement, running on consumer-g… ☆43,070 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,182 · Updated last week (Feb 22, 2026)
- Inference of Mamba and Mamba2 models in pure C. ☆197 · Updated last month (Jan 22, 2026)
- Tools for merging pretrained large language models. ☆6,814 · Updated last month (Jan 26, 2026)
- Tensor library for machine learning. ☆14,152 · Updated this week
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++. ☆5,490 · Updated this week
- WebAssembly binding for llama.cpp - enabling on-browser LLM inference. ☆1,003 · Updated 2 months ago (Dec 17, 2025)
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆615 · Updated last year (Feb 17, 2025)
- Go ahead and axolotl questions. ☆11,335 · Updated this week
- Perplexica is an AI-powered answering engine. ☆29,068 · Updated 2 weeks ago (Feb 13, 2026)
- Port of OpenAI's Whisper model in C/C++. ☆47,067 · Updated this week
- SoTA production-ready AI retrieval system. Agentic Retrieval-Augmented Generation (RAG) with a RESTful API. ☆7,693 · Updated 3 months ago (Nov 7, 2025)
- Inference Llama 2 in one file of pure C. ☆19,213 · Updated last year (Aug 6, 2024)
- Official inference framework for 1-bit LLMs. ☆28,640 · Updated last month (Feb 3, 2026)
- AirLLM 70B inference with single 4GB GPU. ☆12,954 · Updated 5 months ago (Sep 3, 2025)
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆9,544 · Updated this week
- An open-source RAG-based tool for chatting with your documents. ☆25,168 · Updated this week
- aider is AI pair programming in your terminal. ☆41,062 · Updated this week
- Build, run, manage agentic software at scale. ☆38,276 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,905 · Updated this week