huggingface / ratchet
A cross-platform browser ML framework.
☆696 · Updated 6 months ago
Alternatives and similar repositories for ratchet
Users interested in ratchet are comparing it to the libraries listed below.
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference ☆732 · Updated last month
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆222 · Updated 11 months ago
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server ☆372 · Updated this week
- Rust library for generating vector embeddings and reranking ☆523 · Updated this week
- llama.cpp Rust bindings ☆388 · Updated 11 months ago
- 🕸️🦀 A WASM vector similarity search written in Rust ☆963 · Updated last year
- ☆280 · Updated this week
- LLM Orchestrator built in Rust ☆276 · Updated last year
- A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web ☆1,728 · Updated 10 months ago
- A Rust implementation of OpenAI's Whisper model using the burn framework ☆307 · Updated last year
- Low rank adaptation (LoRA) for Candle ☆147 · Updated last month
- ☆136 · Updated last year
- Rust bindings to https://github.com/ggerganov/whisper.cpp ☆847 · Updated last month
- Minimal LLM inference in Rust ☆986 · Updated 7 months ago
- Rust+OpenCL+AVX2 implementation of LLaMA inference code ☆547 · Updated last year
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust) ☆296 · Updated 10 months ago
- Web-optimized vector database (written in Rust) ☆238 · Updated 3 months ago
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆201 · Updated 3 months ago
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆267 · Updated 10 months ago
- ☆129 · Updated last year
- Unofficial Rust bindings to Apple's mlx framework ☆157 · Updated this week
- A fast llama2 decoder in pure Rust ☆1,050 · Updated last year
- Implementation of the RWKV language model in pure WebGPU/Rust ☆305 · Updated 2 weeks ago
- OpenAI compatible API for serving LLAMA-2 model ☆218 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆764 · Updated this week
- Inference Llama 2 in one file of pure Rust 🦀 ☆232 · Updated last year
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆492 · Updated this week
- FastMLX is a high performance production ready API to host MLX models ☆305 · Updated 2 months ago
- ONNX neural network inference engine ☆210 · Updated this week
- Tensor computation with WebGPU acceleration ☆618 · Updated 10 months ago