huggingface / ratchet
A cross-platform browser ML framework.
☆686 · Updated 4 months ago
Alternatives and similar repositories for ratchet:
Users interested in ratchet are comparing it to the libraries listed below.
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference ☆662 · Updated this week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆217 · Updated 10 months ago
- 🕸️🦀 A WASM vector similarity search written in Rust ☆947 · Updated last year
- Rust library for generating vector embeddings, reranking locally ☆473 · Updated last week
- ☆251 · Updated this week
- llama.cpp Rust bindings ☆379 · Updated 9 months ago
- LLM Orchestrator built in Rust ☆278 · Updated last year
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆301 · Updated 11 months ago
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server ☆349 · Updated this week
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆451 · Updated this week
- A fast llama2 decoder in pure Rust ☆1,046 · Updated last year
- ☆137 · Updated last year
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust) ☆288 · Updated 8 months ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆544 · Updated last year
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, AI ☆384 · Updated last year
- A WebGPU-accelerated ONNX inference runtime written 100% in Rust, ready for native and the web ☆1,727 · Updated 8 months ago
- Web-optimized vector database (written in Rust) ☆226 · Updated last month
- Inference Llama 2 in one file of pure Rust 🦀 ☆233 · Updated last year
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆160 · Updated 3 weeks ago
- Fast ML inference & training for ONNX models in Rust ☆1,257 · Updated this week
- Stateful load balancer custom-tailored for llama.cpp ☆742 · Updated 2 weeks ago
- Unofficial Rust bindings to Apple's mlx framework ☆150 · Updated last week
- Rust bindings to https://github.com/ggerganov/whisper.cpp ☆815 · Updated last week
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆195 · Updated 2 months ago
- Rust client for the huggingface hub aiming for a minimal subset of features over the `huggingface-hub` Python package ☆196 · Updated last month
- Minimal LLM inference in Rust ☆985 · Updated 5 months ago
- Low rank adaptation (LoRA) for Candle ☆142 · Updated this week
- Production-ready Inference, Ingestion and Indexing built in Rust 🦀 ☆512 · Updated last week
- Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from R… ☆395 · Updated this week
- ☆126 · Updated 11 months ago