evilsocket / cake
Distributed LLM and StableDiffusion inference for mobile, desktop and server.
☆2,881 · Updated 10 months ago
Alternatives and similar repositories for cake
Users interested in cake are comparing it to the libraries listed below.
- Blazingly fast LLM inference. ☆6,049 · Updated this week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference. ☆2,351 · Updated 2 weeks ago
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,485 · Updated last week
- Run PyTorch LLMs locally on servers, desktop and mobile ☆3,607 · Updated 3 weeks ago
- Fast and accurate automatic speech recognition (ASR) for edge devices ☆2,844 · Updated 3 months ago
- Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models ☆2,811 · Updated 7 months ago
- SCUDA is a GPU over IP bridge allowing GPUs on remote machines to be attached to CPU-only machines. ☆1,750 · Updated 2 months ago
- Local AI API Platform ☆2,760 · Updated last month
- Deep learning at the speed of light. ☆2,415 · Updated this week
- g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains ☆4,222 · Updated 7 months ago
- RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry ☆4,196 · Updated this week
- A language model programming library. ☆5,812 · Updated 2 months ago
- Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audi… ☆8,825 · Updated last week
- tiny vision language model ☆8,374 · Updated 2 weeks ago
- AICI: Prompts as (Wasm) Programs ☆2,046 · Updated 7 months ago
- LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a ch… ☆5,947 · Updated 4 months ago
- A self-organizing file system with llama 3 ☆5,391 · Updated 3 weeks ago
- A fast llama2 decoder in pure Rust. ☆1,055 · Updated last year
- Local realtime voice AI ☆2,355 · Updated 6 months ago
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,610 · Updated this week
- A vector search SQLite extension that runs anywhere! ☆6,050 · Updated 7 months ago
- Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚ ☆30,565 · Updated 5 months ago
- Open-source LLMOps platform for hosting and scaling AI in your own infrastructure 🏓🦙 ☆1,276 · Updated 2 weeks ago
- AIOS: AI Agent Operating System ☆4,565 · Updated 3 weeks ago
- AI app store powered by 24/7 desktop history. open source | 100% local | dev friendly | 24/7 screen, mic recording ☆15,530 · Updated 3 weeks ago
- Minimal LLM inference in Rust ☆1,013 · Updated 10 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024. ☆1,313 · Updated 4 months ago
- 🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT) ☆6,579 · Updated last month
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,245 · Updated last year
- A cross-platform browser ML framework. ☆716 · Updated 9 months ago