lucasjinreal / Crane
A pure-Rust inference engine for LLMs (and any LLM-based MLLM, such as Spark-TTS), powered by the Candle framework.
☆222 · Updated last week
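Crane builds on Candle, Hugging Face's Rust tensor/ML framework. For context on what Candle-level code looks like, here is a minimal sketch using the candle-core crate; it illustrates the framework Crane is built on, not Crane's own API (which is not shown on this page).

```rust
// Minimal candle-core sketch: create two tensors on the CPU and multiply them.
// This only demonstrates the Candle framework that Crane builds on.
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;
    let a = Tensor::new(&[[1f32, 2.0], [3.0, 4.0]], &device)?;
    let b = Tensor::new(&[[0.5f32, 0.0], [0.0, 0.5]], &device)?;
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```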
Alternatives and similar repositories for Crane
Users interested in Crane are comparing it to the libraries listed below.
- A Fish Speech implementation in Rust, with Candle.rs ☆106 · Updated 7 months ago
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆240 · Updated 5 months ago
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆269 · Updated 2 months ago
- ☆434 · Updated this week
- 🔥🔥 Kokoro in Rust. https://huggingface.co/hexgrad/Kokoro-82M Insanely fast, realtime, high-quality TTS. ☆677 · Updated this week
- Blazingly fast inference of diffusion models. ☆118 · Updated 9 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server. ☆561 · Updated this week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆240 · Updated last year
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆46 · Updated last year
- An educational Rust project for exporting and running inference on the Qwen3 LLM family ☆38 · Updated 5 months ago
- Git-like RAG pipeline ☆251 · Updated 2 weeks ago
- Rust bindings for OpenNMT/CTranslate2 ☆49 · Updated this week
- TTS support with GGML ☆209 · Updated 3 months ago
- Unofficial Rust bindings to Apple's MLX framework ☆230 · Updated 3 weeks ago
- Candle Pipelines provides a simple, intuitive interface for Rust developers who want to work with Large Language Models locally, powered … ☆21 · Updated last week
- Low-rank adaptation (LoRA) for Candle. ☆169 · Updated 8 months ago
- Fast serverless LLM inference, in Rust. ☆108 · Updated 2 months ago
- Implementation of the RWKV language model in pure WebGPU/Rust. ☆333 · Updated 2 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆65 · Updated 8 months ago
- A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book. ☆284 · Updated this week
- InferX: Inference as a Service Platform ☆146 · Updated last week
- Implementing the BitNet model in Rust ☆43 · Updated last year
- Kheish: A multi-role LLM agent for tasks like code auditing, file searching, and more, seamlessly leveraging RAG and extensible modules. ☆142 · Updated last year
- llama.cpp Rust bindings ☆410 · Updated last year
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆150 · Updated 6 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Liquid Audio - Speech-to-Speech audio models by Liquid AI ☆331 · Updated this week
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆339 · Updated last year
- Library for doing RAG ☆80 · Updated last week
- Use piper TTS models in Rust ☆45 · Updated last year