Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.
☆2,917 · Apr 14, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for distributed-llama
Users interested in distributed-llama are comparing it to the libraries listed below.
- Run frontier AI locally. ☆44,293 · Updated this week
- Distribute and run LLMs with a single file. ☆24,349 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. ☆10,109 · Sep 7, 2024 · Updated last year
- Open-source LLM/VLM load balancer and serving platform for self-hosting LLMs (and VLMs) at scale 🏓🦙 Alternative to projects like llm-d,… ☆1,540 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆872 · Apr 3, 2026 · Updated last month
- LLM inference in C/C++. ☆107,892 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,511 · Mar 4, 2026 · Updated 2 months ago
- Distributed inference for mobile, desktop and server. ☆3,027 · Apr 24, 2026 · Updated last week
- High-speed Large Language Model Serving for Local Deployment. ☆9,390 · Jan 24, 2026 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆78,979 · Updated this week
- Universal LLM Deployment Engine with ML Compilation. ☆22,557 · Apr 22, 2026 · Updated last week
- Fast, flexible LLM inference. ☆7,074 · Apr 15, 2026 · Updated 2 weeks ago
- 💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows. ☆12,453 · Updated this week
- ☆136 · Apr 8, 2026 · Updated 3 weeks ago
- Web UI for training and running open models like Gemma 4, Qwen3.6, DeepSeek, gpt-oss locally. ☆63,536 · Updated this week
- Large-scale LLM inference engine. ☆1,714 · Updated this week
- Inference of Mamba, Mamba2 and Mamba3 models in pure C. ☆200 · Mar 18, 2026 · Updated last month
- Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time. ☆22,391 · Apr 12, 2026 · Updated 3 weeks ago
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆45,153 · Updated this week
- LocalAI is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required. ☆46,040 · Updated this week
- High-performance in-browser LLM inference engine. ☆17,858 · Apr 24, 2026 · Updated last week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆630 · Mar 9, 2026 · Updated last month
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++. ☆5,894 · Updated this week
- Tensor library for machine learning. ☆14,560 · Updated this week
- WebAssembly binding for llama.cpp, enabling on-browser LLM inference. ☆1,048 · Apr 28, 2026 · Updated last week
- Python bindings for llama.cpp. ☆10,264 · Updated this week
- Vane is an AI-powered answering engine. ☆34,125 · Apr 11, 2026 · Updated 3 weeks ago
- Go ahead and axolotl questions. ☆11,779 · Apr 27, 2026 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,337 · Updated this week
- Port of OpenAI's Whisper model in C/C++. ☆49,148 · Apr 20, 2026 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆7,052 · Mar 15, 2026 · Updated last month
- Your agent in your terminal, equipped with local tools: writes code, uses the terminal, browses the web. Make your own persistent autonom… ☆4,287 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization. ☆352 · Aug 30, 2024 · Updated last year
- Inference Llama 2 in one file of pure C. ☆19,460 · Aug 6, 2024 · Updated last year
- SoTA production-ready AI retrieval system. Agentic Retrieval-Augmented Generation (RAG) with a RESTful API. ☆7,795 · Nov 7, 2025 · Updated 5 months ago
- Open-source desktop app for local LLMs. Text, vision, tool-calling, OpenAI/Anthropic-compatible API. ☆46,931 · Updated this week
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆10,387 · Updated this week
- An open-source RAG-based tool for chatting with your documents. ☆25,350 · Apr 3, 2026 · Updated last month
- aider is AI pair programming in your terminal. ☆44,187 · Apr 25, 2026 · Updated last week