FL33TW00D / steelix
Your one-stop CLI for ONNX model analysis.
☆47 · Updated 2 years ago
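steelix analyses an ONNX model from the command line. As a rough illustration of the kind of graph inspection involved, below is a minimal Rust sketch; it assumes the separate `tract-onnx` crate and a local `model.onnx` file, not steelix's own internals or CLI flags.

```rust
// A rough sketch only. It assumes the `tract-onnx` crate (an ONNX
// loader/runtime for Rust), NOT steelix's own code or command-line interface.
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Parse the ONNX protobuf into tract's inference graph.
    let model = tract_onnx::onnx().model_for_path("model.onnx")?;
    // A model-analysis tool would now walk the graph; here we only count operators.
    println!("operators in graph: {}", model.nodes().len());
    Ok(())
}
```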
Alternatives and similar repositories for steelix
Users interested in steelix are comparing it to the libraries listed below.
- Rust wrapper for Microsoft's ONNX Runtime with CUDA support (version 1.7) (☆23 · Updated 2 years ago)
- GPU-based FFT written in Rust and CubeCL (☆23 · Updated 2 weeks ago)
- ☆26 · Updated last year
- A collection of optimisers for use with Candle (☆36 · Updated last month)
- A client library in Rust for NVIDIA Triton (☆30 · Updated last year)
- An extension library for Candle that provides PyTorch functions not currently available in Candle (☆39 · Updated last year)
- A neural network inference library written in Rust (☆63 · Updated 11 months ago)
- ☆58 · Updated 2 years ago
- ☆20 · Updated 8 months ago
- A Rust library for high-performance tensor exchange with Python (☆47 · Updated 2 weeks ago)
- ☆23 · Updated 2 months ago
- ☆88 · Updated 5 months ago
- Rust crate for some audio utilities (☆24 · Updated 3 months ago)
- ☆30 · Updated 7 months ago
- 8-bit floating-point types for Rust (☆46 · Updated 3 months ago)
- A demo server serving BERT through ONNX with GPU support, written in Rust with <3 (☆40 · Updated 3 years ago)
- ☆12 · Updated last year
- A diffusers API in Burn (Rust) (☆19 · Updated 11 months ago)
- Rust library for whisper.cpp-compatible Mel spectrograms (☆70 · Updated last month)
- Asynchronous CUDA for Rust (☆33 · Updated 7 months ago)
- 🦀 Example of serving deep learning models in Rust with batched prediction (☆34 · Updated 2 years ago)
- Experimental ONNX implementation for WASI NN (☆48 · Updated 3 years ago)
- Rust implementation of Hugging Face transformers pipelines using an onnxruntime backend, with bindings to C# and C (☆39 · Updated 2 years ago)
- A minimal OpenCL, CUDA, Vulkan and host-CPU array manipulation engine / framework (☆74 · Updated 2 weeks ago)
- Rust library for running TensorRT-accelerated deep learning models (☆56 · Updated 3 years ago)
- Experimental compiler for deep learning models (☆67 · Updated last month)
- Example of tch-rs on M1 (☆53 · Updated last year)
- Low-rank adaptation (LoRA) for Candle (☆150 · Updated 2 months ago)
- 🔭 Interactively explore `onnx` networks in your CLI (☆25 · Updated last year)
- LLaMA 7B with CUDA acceleration implemented in Rust; minimal GPU memory needed! (☆106 · Updated last year)