3xMike / tritonserver-rs
Rust crate for easy and efficient ML model inference
☆29 · Updated 4 months ago
Alternatives and similar repositories for tritonserver-rs
Users interested in tritonserver-rs are comparing it to the libraries listed below
- Rust library for running TensorRT-accelerated deep learning models ☆62 · Updated 4 years ago
- Rust wrapper for Microsoft's ONNX Runtime (version 1.8) ☆312 · Updated last year
- ☆36 · Updated last year
- Rust wrapper for Microsoft's ONNX Runtime with CUDA support (version 1.7) ☆24 · Updated 3 years ago
- A client library in Rust for Nvidia Triton. ☆30 · Updated 2 years ago
- ☆26 · Updated 7 months ago
- Models and examples built with Burn ☆305 · Updated 2 weeks ago
- Low-rank adaptation (LoRA) for Candle. ☆168 · Updated 7 months ago
- Rust bindings for OpenVINO™ ☆108 · Updated 3 months ago
- A framework for building high-performance real-time multiple object trackers ☆252 · Updated 7 months ago
- A collection of optimisers for use with candle ☆43 · Updated 3 months ago
- Asynchronous TensorRT for Rust. ☆37 · Updated 2 months ago
- Your one-stop CLI for ONNX model analysis. ☆47 · Updated 3 years ago
- ONNX neural network inference engine ☆258 · Updated this week
- An extension library for Candle that provides PyTorch functions not currently available in Candle ☆40 · Updated last year
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆242 · Updated last month
- GPU-based FFT written in Rust and CubeCL ☆24 · Updated 5 months ago
- Example of tch-rs on M1 ☆55 · Updated last year
- Inference Llama 2 in one file of pure Rust 🦀 ☆233 · Updated 2 years ago
- Implement LLaVA using Candle ☆15 · Updated last year
- Read and write TensorBoard data using Rust ☆23 · Updated last year
- A demo server serving BERT through ONNX with GPU, written in Rust with <3 ☆41 · Updated 4 years ago
- LLM orchestrator built in Rust ☆284 · Updated last year
- Rust-tokenizer offers high-performance tokenizers for modern language models, including WordPiece, Byte-Pair Encoding (BPE) and Unigram (…) ☆327 · Updated 2 years ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server. ☆529 · Updated last week
- Automatically derive Python dunder methods for your Rust code ☆20 · Updated 7 months ago
- An example of using Torch Rust bindings to serve trained machine learning models via Actix Web ☆16 · Updated 4 years ago
- CLI utility to inspect and explore .safetensors and .gguf files ☆34 · Updated 3 weeks ago
- ☆128 · Updated last year
- Rust port of sentence-transformers (https://github.com/UKPLab/sentence-transformers) ☆122 · Updated last year