lucasjinreal / Crane
A pure Rust-based LLM, VLM, VLA, TTS, and OCR inference engine, powered by Candle & Rust. An alternative to llama.cpp, but much simpler and cleaner.
☆247 · Updated 2 weeks ago (Jan 30, 2026)
Alternatives and similar repositories for Crane
Users interested in Crane are comparing it to the libraries listed below.
- Rust standalone inference for the Namo-500M series of models. Extremely tiny; runs a VLM on CPU. ☆24 · Updated 11 months ago (Mar 12, 2025)
- 🔥🔥 Kokoro in Rust. https://huggingface.co/hexgrad/Kokoro-82M Insanely fast, real-time TTS with the best quality you've ever heard. ☆704 · Updated 3 weeks ago (Jan 20, 2026)
- ☆24 · Updated last year (Jan 22, 2025)
- the rent a hal project for AI ☆22 · Updated 6 months ago (Aug 12, 2025)
- A Rust 🦀 port of the Hugging Face smolagents library. ☆42 · Updated 10 months ago (Mar 26, 2025)
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆290 · Updated 3 months ago (Nov 1, 2025)
- Run Orpheus 3B Locally with Gradio UI, Standalone App ☆23 · Updated 10 months ago (Apr 1, 2025)
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆47 · Updated last year (May 3, 2024)
- ☆64 · Updated 7 months ago (Jun 24, 2025)
- ☆18 · Updated 5 months ago (Aug 19, 2025)
- A pure and fast NumPy implementation of Mamba with cache support. ☆18 · Updated last year (Jun 16, 2024)
- AI Assistant ☆20 · Updated 9 months ago (Apr 18, 2025)
- ☆51 · Updated 11 months ago (Feb 19, 2025)
- A simple, "Ollama-like" tool for managing and running GGUF language models from your terminal. ☆23 · Updated last month (Jan 2, 2026)
- A forward proxy to turn network traffic into personal memory for AI agents ☆33 · Updated last month (Jan 6, 2026)
- ☆19 · Updated 7 months ago (Jul 4, 2025)
- ☆461 · Updated this week
- Service for testing out the new Qwen2.5-Omni model ☆63 · Updated 9 months ago (Apr 30, 2025)
- Fast, flexible LLM inference ☆6,580 · Updated this week
- ☆178 · Updated 6 months ago (Aug 10, 2025)
- LexiCrawler is a powerful Go-based web crawling API meticulously designed to extract, clean, and transform web page content into a pristi… ☆48 · Updated 11 months ago (Feb 27, 2025)
- The heart of The Pulsar App: fast, secure and shared inference with a modern UI ☆59 · Updated last year (Dec 1, 2024)
- An educational Rust project for exporting and running inference on the Qwen3 LLM family ☆40 · Updated 6 months ago (Aug 3, 2025)
- An OpenVoice-based voice cloning tool, single executable file (~14M), supporting multiple formats without dependencies on ffmpeg, Python,… ☆44 · Updated 3 weeks ago (Jan 18, 2026)
- Kotlin library for Cortex.cpp, a local AI API platform used to run and customize LLMs. ☆10 · Updated 10 months ago (Apr 2, 2025)
- A free and open-source GUI tool that simplifies combining multiple code files into one, with automatic labeling and support for various p… ☆14 · Updated last year (Jan 3, 2025)
- Fast serverless LLM inference, in Rust. ☆110 · Updated 3 months ago (Nov 5, 2025)
- LLM inference in C/C++ ☆23 · Updated last year (Oct 4, 2024)
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ and EXL2 models ☆78 · Updated last year (Dec 17, 2024)
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆243 · Updated 6 months ago (Aug 6, 2025)
- Get aid from local LLMs right in your PowerShell ☆15 · Updated 9 months ago (May 2, 2025)
- Run GEPA on your favorite non-Python libraries. ☆32 · Updated 3 weeks ago (Jan 22, 2026)
- A thin Cython wrapper around llama.cpp, whisper.cpp and stable-diffusion.cpp ☆16 · Updated this week
- A universal adapter including zero-copy Python bindings for Philip Turner's metal flash attention library. ☆23 · Updated 2 months ago (Dec 15, 2025)
- A Rust-based SenseVoiceSmall implementation ☆23 · Updated last month (Jan 12, 2026)
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Updated 5 months ago (Sep 14, 2025)
- Frontend for Uplink ☆12 · Updated 9 months ago (Apr 22, 2025)
- Fast ML inference & training for ONNX models in Rust ☆1,985 · Updated this week
- Create text chunks which end at natural stopping points without using a tokenizer ☆26 · Updated 2 months ago (Nov 26, 2025)