Distributed inference for mobile, desktop and server.
☆2,965 · Mar 18, 2026 · Updated this week
Alternatives and similar repositories for cake
Users interested in cake are comparing it to the libraries listed below.
- Run frontier AI locally. ☆42,639 · Updated this week
- Fast, flexible LLM inference ☆6,713 · Updated this week
- Minimalist ML framework for Rust ☆19,735 · Updated this week
- Claude Engineer is an interactive command-line interface (CLI) that leverages the power of Anthropic's Claude-3.5-Sonnet model to assist … ☆11,161 · Dec 12, 2024 · Updated last year
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server. ☆614 · Updated this week
- Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. ☆14,679 · Updated this week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference. ☆2,865 · Feb 10, 2026 · Updated last month
- screenpipe turns your computer into a personal AI that knows everything you've done. record. search. automate. all local, all private, al… ☆17,236 · Updated this week
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models ☆6,152 · Jun 24, 2024 · Updated last year
- Universal memory layer for AI Agents ☆50,147 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆22,246 · Updated this week
- Distribute and run LLMs with a single file. ☆23,794 · Mar 14, 2026 · Updated last week
- 🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT) ☆6,805 · Jul 4, 2025 · Updated 8 months ago
- Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cl… ☆29,611 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆8,834 · Jan 24, 2026 · Updated last month
- LLM inference in C/C++ ☆98,098 · Updated this week
- A lightweight library for portable low-level GPU computation using WebGPU. ☆3,954 · Oct 8, 2025 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week
- A library for building fast, reliable and evolvable network services. ☆26,269 · Mar 13, 2026 · Updated last week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆54,096 · Updated this week
- Ingest, parse, and optimize any data format ➡️ from documents to multimedia ➡️ for enhanced compatibility with GenAI frameworks ☆6,809 · Dec 12, 2025 · Updated 3 months ago
- Apache OpenDAL: One Layer, All Storage. ☆4,951 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- A modular graph-based Retrieval-Augmented Generation (RAG) system ☆31,474 · Mar 15, 2026 · Updated last week
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆9,832 · Mar 4, 2026 · Updated 2 weeks ago
- Tantivy is a full-text search engine library inspired by Apache Lucene and written in Rust ☆14,740 · Mar 13, 2026 · Updated last week
- An open-source RAG-based tool for chatting with your documents. ☆25,205 · Mar 8, 2026 · Updated 2 weeks ago
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone ☆24,144 · Mar 7, 2026 · Updated 2 weeks ago
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,614 · Feb 8, 2026 · Updated last month
- LLM training in simple, raw C/CUDA ☆29,216 · Jun 26, 2025 · Updated 8 months ago
- Self-hosted AI coding assistant ☆33,022 · Mar 2, 2026 · Updated 2 weeks ago
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models. ☆165,557 · Updated this week
- Inference Llama 2 in one file of pure C ☆19,262 · Aug 6, 2024 · Updated last year
- Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models ☆2,872 · Jan 7, 2025 · Updated last year
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆10,020 · Sep 7, 2024 · Updated last year
- ⚙️🦀 Build modular and scalable LLM Applications in Rust ☆6,528 · Mar 13, 2026 · Updated last week
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,483 · Updated this week
- SOTA Open Source TTS ☆27,364 · Mar 13, 2026 · Updated last week
- An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations. ☆28,006 · Sep 30, 2025 · Updated 5 months ago