Lednik7 / CLIP-ONNX
A simple library to speed up CLIP inference by up to 3x (on a K80 GPU).
☆219 · Updated last year
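As a rough illustration of the kind of pipeline this speedup relies on, here is a minimal sketch (an assumed workflow, not CLIP-ONNX's own API): export CLIP's image encoder with `torch.onnx.export` and run it through ONNX Runtime, where a CUDA or TensorRT execution provider is what would deliver the gain over plain PyTorch. File names and shapes below are illustrative.

```python
# Minimal sketch (assumed workflow, not CLIP-ONNX's own API): export CLIP's
# image encoder to ONNX and run it with ONNX Runtime.
import clip                    # pip install git+https://github.com/openai/CLIP.git
import onnxruntime as ort
import torch

model, preprocess = clip.load("ViT-B/32", device="cpu")
model.eval()

dummy = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image batch
torch.onnx.export(
    model.visual,                            # image encoder only
    dummy,
    "clip_visual.onnx",                      # illustrative output path
    input_names=["image"],
    output_names=["embedding"],
    dynamic_axes={"image": {0: "batch"}},
    opset_version=17,                        # a recent PyTorch/opset may be needed
)

# CPU provider shown for portability; CUDA/TensorRT providers are where the
# speedup over plain PyTorch inference would come from.
session = ort.InferenceSession("clip_visual.onnx",
                               providers=["CPUExecutionProvider"])
(embedding,) = session.run(None, {"image": dummy.numpy()})
print(embedding.shape)                       # (1, 512) for ViT-B/32
```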
Alternatives and similar repositories for CLIP-ONNX
Users interested in CLIP-ONNX are comparing it to the libraries listed below.
- Deploy stable diffusion model with ONNX/TensorRT + Triton server ☆123 · Updated last year
- Low-latency ONNX and TensorRT based zero-shot classification and detection with contrastive language-image pre-training based prompts ☆41 · Updated 9 months ago
- Efficiently read embeddings in streaming from any filesystem ☆98 · Updated last year
- An ONNX-based implementation of the CLIP model that doesn't depend on torch or torchvision. ☆69 · Updated 11 months ago
- A Toolkit to Help Optimize Onnx Model ☆153 · Updated this week
- ☆111 · Updated 3 years ago
- Script to typecast ONNX model parameters from INT64 to INT32. ☆107 · Updated last year
- Crop using CLIP ☆340 · Updated 2 years ago
- Python bindings for ggml ☆141 · Updated 9 months ago
- The Triton backend for TensorRT. ☆76 · Updated 3 weeks ago
- Export Donut model to onnx and run it with onnxruntime ☆23 · Updated last year
- ☆53 · Updated 2 years ago
- Model compression for ONNX ☆96 · Updated 6 months ago
- Official implementation of "Active Image Indexing" ☆59 · Updated 2 years ago
- Python package to generate image embeddings with CLIP without PyTorch/TensorFlow ☆151 · Updated 3 years ago
- Common utilities for ONNX converters ☆270 · Updated 6 months ago
- A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change op… ☆293 · Updated last year
- Accelerate PyTorch models with ONNX Runtime ☆363 · Updated 3 months ago
- The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API. ☆133 · Updated 2 weeks ago
- A repository containing datasets and tools to train a watermark classifier. ☆68 · Updated 2 years ago
- Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel Representation ☆198 · Updated last week
- Get hundreds of millions of image+url pairs from the crawling-at-home dataset and preprocess them ☆220 · Updated last year
- Accelerate Segment Anything model inference using TensorRT 8.6.1.6 ☆92 · Updated last year
- A Toolkit to Help Optimize Large Onnx Model ☆158 · Updated last year
- Exporting Segment Anything, MobileSAM, and Segment Anything 2 into ONNX format for easy deployment ☆341 · Updated 10 months ago
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ☆285 · Updated last year
- Porting of Pillow resize method in C++ and OpenCV. ☆134 · Updated 2 years ago
- Examples for using ONNX Runtime for model training. ☆338 · Updated 7 months ago
- Scailable ONNX python tools ☆97 · Updated 7 months ago
- Diffusers training with mmengine ☆100 · Updated last year