b4rtaz / distributed-llama
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference: more devices mean faster inference.
☆2,804 · updated last week
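As a rough sketch of how such a cluster is assembled (based on the project's README; the binary name, flags, and file names below are assumptions and may differ between versions), each helper device runs a worker process, and the root node lists the workers to shard inference across:

```sh
# On each helper device: start a worker that waits for the root node.
# (--port and --nthreads are flag names as documented upstream; treat as assumptions.)
./dllama worker --port 9998 --nthreads 4

# On the root device: run inference, sharding layers across the listed workers.
# Model/tokenizer paths are placeholders; the README notes the total node
# count (root + workers) should be a power of two (1, 2, 4, 8, ...).
./dllama inference \
  --model dllama_model_llama3_8b_q40.m \
  --tokenizer dllama_tokenizer_llama3.t \
  --prompt "Hello" \
  --nthreads 4 \
  --workers 10.0.0.2:9998 10.0.0.3:9998 10.0.0.4:9998
```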
Alternatives and similar repositories for distributed-llama
Users interested in distributed-llama are comparing it to the libraries listed below.
- Local AI API Platform · ☆2,761 · updated 6 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ☆4,426 · updated last month
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc · ☆2,209 · updated last week
- Large-scale LLM inference engine · ☆1,631 · updated this week
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 · ☆1,425 · updated last week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) · ☆777 · updated this week
- SCUDA is a GPU over IP bridge allowing GPUs on remote machines to be attached to CPU-only machines. · ☆1,800 · updated 3 weeks ago
- The official API server for Exllama. OAI compatible, lightweight, and fast. · ☆1,115 · updated this week
- Big & Small LLMs working together · ☆1,249 · updated this week
- Blazingly fast LLM inference. · ☆6,379 · updated this week
- VS Code extension for LLM-assisted code/text completion · ☆1,135 · updated last week
- Distributed LLM and StableDiffusion inference for mobile, desktop and server. · ☆2,899 · updated last year
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference · ☆981 · updated last month
- Llama 2 Everywhere (L2E) · ☆1,526 · updated 5 months ago
- llama.cpp fork with additional SOTA quants and improved performance · ☆1,553 · updated this week
- An awesome repository of local AI tools · ☆1,813 · updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … · ☆612 · updated 11 months ago
- RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for in… · ☆2,532 · updated last week
- Effortlessly run LLM backends, APIs, frontends, and services with one command. · ☆2,333 · updated this week
- Simple go utility to download HuggingFace Models and Datasets · ☆813 · updated 2 weeks ago
- AirLLM 70B inference with single 4GB GPU · ☆8,979 · updated 4 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. · ☆2,906 · updated 2 years ago
- Local voice chatbot for engaging conversations, powered by Ollama, Hugging Face Transformers, and Coqui TTS Toolkit · ☆783 · updated last year
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality · ☆4,553 · updated last year
- Python bindings for the Transformer models implemented in C/C++ using GGML library. · ☆1,876 · updated last year
- Text-To-Speech, RAG, and LLMs. All local! · ☆1,895 · updated last year
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? · ☆1,867 · updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model · ☆1,562 · updated 10 months ago
- Compare open-source local LLM inference projects by their metrics to assess popularity and activeness. · ☆705 · updated 2 months ago
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali · ☆2,629 · updated last month