huggingface / ratchet
A cross-platform browser ML framework.
☆725 · Updated last year
Alternatives and similar repositories for ratchet
Users interested in ratchet are comparing it to the libraries listed below:
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆234 · Updated last year
- llama.cpp Rust bindings ☆408 · Updated last year
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server ☆529 · Updated last week
- A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web ☆1,739 · Updated last year
- ☆397 · Updated last week
- 🕸️🦀 A WASM vector similarity search written in Rust ☆1,007 · Updated 2 years ago
- LLM Orchestrator built in Rust ☆284 · Updated last year
- Rust library for generating vector embeddings and reranking; a rewrite of qdrant/fastembed ☆661 · Updated this week
- A Rust implementation of OpenAI's Whisper model using the Burn framework ☆329 · Updated last year
- WebAssembly bindings for llama.cpp, enabling on-browser LLM inference ☆939 · Updated last month
- Rust bindings to https://github.com/ggerganov/whisper.cpp ☆926 · Updated 3 months ago
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust) ☆328 · Updated last year (see the Candle sketch after this list)
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆239 · Updated 3 months ago
- A fast llama2 decoder in pure Rust ☆1,056 · Updated last year
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆550 · Updated last year
- Unofficial Rust bindings to Apple's MLX framework ☆208 · Updated last month
- Inference Llama 2 in one file of pure Rust 🦀 ☆233 · Updated 2 years ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆242 · Updated this week (see the hf-hub sketch after this list)
- ☆140 · Updated last year
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 ☆1,366 · Updated last month
- Minimal LLM inference in Rust ☆1,022 · Updated last year
- A pure-Rust inference engine for LLMs (including LLM-based MLLMs such as Spark-TTS), powered by the Candle framework ☆202 · Updated 2 weeks ago
- Low-rank adaptation (LoRA) for Candle ☆168 · Updated 7 months ago
- Fast ML inference & training for ONNX models in Rust ☆1,720 · Updated last week
- Ready-made tokenizer library for working with GPT and tiktoken ☆351 · Updated 2 weeks ago (see the tokenizer sketch after this list)
- An implementation of the diffusers API in Rust ☆580 · Updated last year
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆250 · Updated 3 weeks ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆285 · Updated last year
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆606 · Updated last week
- Rust client for the Qdrant vector search engine ☆353 · Updated last week
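Several of the entries above (the porting tutorial, the LoRA crate, and the Candle-based inference engines) build on Hugging Face's Candle framework. As a feel for that ecosystem, here is a minimal sketch of tensor creation and a matrix multiply with the `candle-core` crate; it follows the crate's published basic API, but treat it as an illustration under those assumptions rather than a definitive example.

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Run on the CPU; Candle also provides CUDA and Metal devices.
    let device = Device::Cpu;

    // Two random tensors, analogous to torch.randn in PyTorch.
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;

    // Matrix multiply and print the resulting 2x4 tensor.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```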
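For the Hugging Face Hub client entry, a minimal sketch of downloading a file from a model repository, assuming the `hf-hub` crate with default features and its synchronous API (the same client the Candle examples use); the repository id and filename are placeholders.

```rust
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a client that caches downloads under the standard HF cache directory.
    let api = Api::new()?;

    // Point at a model repository on the Hub (placeholder repo id).
    let repo = api.model("bert-base-uncased".to_string());

    // Download (or reuse the cached copy of) a single file and get its local path.
    let config_path = repo.get("config.json")?;
    println!("config cached at {}", config_path.display());
    Ok(())
}
```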
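And for the GPT/tiktoken tokenizer entry, a sketch of counting tokens, assuming the `tiktoken-rs` crate with its `cl100k_base` encoding constructor and `anyhow` for error handling; the input string is arbitrary.

```rust
use tiktoken_rs::cl100k_base;

fn main() -> anyhow::Result<()> {
    // Load the cl100k_base BPE used by GPT-3.5/GPT-4 style models.
    let bpe = cl100k_base()?;

    // Encode a string, allowing special tokens, and count the resulting tokens.
    let tokens = bpe.encode_with_special_tokens("The quick brown fox jumps over the lazy dog.");
    println!("{} tokens", tokens.len());
    Ok(())
}
```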