WasmEdge / mediapipe-rs
A Rust library for Google's MediaPipe AI tasks. Write AI inference applications for image recognition, text classification, audio/video processing, and more in Rust, and run them in the secure WasmEdge sandbox. Zero Python dependency!
★153 · Updated last month
Related projects
Alternatives and complementary repositories for mediapipe-rs
- A Rust implementation of OpenAI's Whisper model using the burn framework · ★268 · Updated 6 months ago
- Inference Llama 2 in one file of pure Rust · ★228 · Updated last year
- Low-rank adaptation (LoRA) for Candle · ★127 · Updated 2 months ago
- Stable Diffusion XL ported to Rust's burn framework · ★251 · Updated 6 months ago
- Stable Diffusion v1.4 ported to Rust's burn framework · ★316 · Updated last month
- Moly: an AI LLM GUI app in pure Rust · ★150 · Updated this week
- Llama2 LLM ported to Rust burn · ★275 · Updated 6 months ago
- Rust audio/video engine · ★77 · Updated last year
- Models and examples built with Burn · ★182 · Updated 3 weeks ago
- Rust client for the Hugging Face Hub aiming for a minimal subset of features over the `huggingface-hub` Python package · ★153 · Updated last month
- ONNX neural network inference engine · ★123 · Updated this week
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust) · ★249 · Updated 3 months ago
- Rust + Large Language Models: make AI services freely and easily · ★180 · Updated 8 months ago
- Rust bindings for OpenVINO™ · ★85 · Updated 3 weeks ago
- Fast, streaming indexing and query library for AI (RAG) applications, written in Rust · ★252 · Updated this week
- An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Was… · ★121 · Updated 3 weeks ago
- Example of tch-rs on M1 · ★50 · Updated 7 months ago
- Wrap a standalone FFmpeg binary in an intuitive Iterator interface · ★271 · Updated this week
- High-level, optionally asynchronous Rust bindings to llama.cpp · ★176 · Updated 5 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust