huggingface / ratchet
A cross-platform browser ML framework.
☆658 · Updated 2 months ago
Alternatives and similar repositories for ratchet:
Users interested in ratchet are comparing it to the libraries listed below.
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆204 · Updated 8 months ago
- WebAssembly binding for llama.cpp - enabling on-browser LLM inference ☆582 · Updated last week
- llama.cpp Rust bindings ☆371 · Updated 7 months ago
- 🕸️🦀 A WASM vector similarity search written in Rust ☆919 · Updated last year
- ☆216 · Updated this week
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆290 · Updated 9 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆306 · Updated this week
- LLM orchestrator built in Rust ☆272 · Updated 11 months ago
- ☆136 · Updated last year
- Rust bindings to https://github.com/ggerganov/whisper.cpp ☆759 · Updated this week
- Rust library for generating vector embeddings and reranking locally ☆426 · Updated this week
- A WebGPU-accelerated ONNX inference runtime written 100% in Rust, ready for native and the web ☆1,704 · Updated 7 months ago
- Fast, streaming indexing, querying, and agentic LLM applications in Rust ☆376 · Updated this week
- Inference Llama 2 in one file of pure Rust 🦀 ☆231 · Updated last year
- 🔥🔥 Kokoro in Rust (https://huggingface.co/hexgrad/Kokoro-82M): insanely fast, real-time TTS with high quality ☆385 · Updated this week
- Tutorial for porting PyTorch Transformer models to Candle (Rust) ☆281 · Updated 6 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package (see the sketch after this list) ☆181 · Updated this week
- Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from R… ☆350 · Updated this week
- Minimal LLM inference in Rust ☆969 · Updated 3 months ago
- Unofficial Rust bindings to Apple's MLX framework ☆123 · Updated this week
- Web-optimized vector database (written in Rust) ☆208 · Updated last week
- A fast llama2 decoder in pure Rust ☆1,033 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆706 · Updated last month
- ⚡ Edgen: local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), … ☆354 · Updated 8 months ago
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, and AI ☆350 · Updated 11 months ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆543 · Updated last year
- Low-rank adaptation (LoRA) for Candle ☆142 · Updated 6 months ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆235 · Updated 6 months ago
- A tiny embedding database in pure Rust ☆394 · Updated last year
- Ready-made tokenizer library for working with GPT and tiktoken (see the sketch after this list) ☆289 · Updated last month
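To give a feel for the Hugging Face Hub client mentioned above, here is a minimal sketch using the `hf-hub` crate's synchronous API (`hf_hub::api::sync::Api`). It assumes the crate's default features (which include the blocking backend); the repo name `bert-base-uncased` and the file `config.json` are only placeholders, not an official example from the repository.

```rust
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a client against the public Hugging Face Hub (placeholder repo/file below).
    let api = Api::new()?;
    let repo = api.model("bert-base-uncased".to_string());

    // Download (or reuse from the local cache) a single file from the repo.
    let path = repo.get("config.json")?;
    println!("downloaded to {}", path.display());
    Ok(())
}
```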
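Similarly, a rough sketch of the GPT/tiktoken tokenizer library (the `tiktoken-rs` crate), assuming its `cl100k_base` encoding helper; this is an illustration of typical usage, not code taken from the repository.

```rust
use tiktoken_rs::cl100k_base;

fn main() {
    // Load the cl100k_base BPE encoding used by GPT-3.5/GPT-4 style models.
    let bpe = cl100k_base().unwrap();

    // Encode a string and report how many tokens it produces.
    let tokens = bpe.encode_with_special_tokens("Hello, ratchet and friends!");
    println!("token count: {}", tokens.len());
}
```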