morousg / cvGPUSpeedup
A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more!
☆51 · Updated last week
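For readers unfamiliar with the problem cvGPUSpeedup targets, the sketch below shows a conventional, unfused OpenCV-CUDA pre-processing chain built only from standard `cv::cuda` calls (resize, cvtColor, convertTo, split). It is not cvGPUSpeedup's own API, and the 640x640 input size and [0, 1] scaling are illustrative assumptions; each step launches its own kernel and materializes an intermediate in GPU memory, which is the per-operation overhead a fused implementation aims to remove while still operating on `cv::cuda::GpuMat` objects.

```cpp
#include <vector>

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>   // cv::cuda::split
#include <opencv2/cudaimgproc.hpp>  // cv::cuda::cvtColor
#include <opencv2/cudawarping.hpp>  // cv::cuda::resize

// Unfused baseline: every step launches a separate kernel and writes its
// intermediate result to global memory. (Hypothetical example, not the
// cvGPUSpeedup API.)
void preprocessBaseline(const cv::cuda::GpuMat& srcBGR,
                        std::vector<cv::cuda::GpuMat>& planesOut,
                        cv::cuda::Stream& stream) {
    cv::cuda::GpuMat resized, rgb, f32;

    // 1. Resize to an (illustrative) 640x640 network input size.
    cv::cuda::resize(srcBGR, resized, cv::Size(640, 640), 0.0, 0.0,
                     cv::INTER_LINEAR, stream);

    // 2. Swap BGR to RGB channel order.
    cv::cuda::cvtColor(resized, rgb, cv::COLOR_BGR2RGB, 0, stream);

    // 3. Convert to float32 and scale pixel values into [0, 1].
    rgb.convertTo(f32, CV_32FC3, 1.0 / 255.0, 0.0, stream);

    // 4. Split interleaved HWC data into per-channel planes (CHW-style).
    cv::cuda::split(f32, planesOut, stream);
}
```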
Alternatives and similar repositories for cvGPUSpeedup
Users interested in cvGPUSpeedup are comparing it to the libraries listed below.
- Model compression for ONNX · ☆96 · Updated 7 months ago
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP · ☆59 · Updated 2 months ago
- Awesome code, projects, books, etc. related to CUDA · ☆19 · Updated this week
- A tool to convert a TensorRT engine/plan to a fake ONNX · ☆40 · Updated 2 years ago
- CLIP and SigLIP models optimized with TensorRT with a Transformers-like API · ☆27 · Updated 9 months ago
- ☆33 · Updated last month
- IntLLaMA: A fast and light quantization solution for LLaMA · ☆18 · Updated last year
- A CUDA kernel for NHWC GroupNorm for PyTorch · ☆19 · Updated 8 months ago
- C++ implementations for various tokenizers (sentencepiece, tiktoken, etc.) · ☆32 · Updated this week
- Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. · ☆10 · Updated last year
- ☆27 · Updated 2 weeks ago
- NVIDIA TensorRT Hackathon 2023 second-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM · ☆42 · Updated last year
- HunyuanDiT with TensorRT and libtorch · ☆17 · Updated last year
- ☆18 · Updated 2 years ago
- An easy way to run, test, benchmark and tune OpenCL kernel files · ☆23 · Updated last year
- Decoding Attention, specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference · ☆38 · Updated last month
- Open deep learning compiler stack for CPU, GPU and specialized accelerators · ☆19 · Updated last week
- Snapdragon Neural Processing Engine (SNPE) SDK: the SNPE is a Qualcomm Snapdragon software accelerate… · ☆34 · Updated 3 years ago
- FlexAttention w/ FlashAttention3 Support · ☆26 · Updated 9 months ago
- EfficientViT is a new family of vision models for efficient high-resolution vision. · ☆26 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… · ☆23 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆110 · Updated 10 months ago
- A toolkit to help optimize large ONNX models · ☆157 · Updated last year
- ☆14 · Updated 2 years ago
- [WIP] Better (FP8) attention for Hopper · ☆31 · Updated 4 months ago
- ☆17 · Updated last year
- JAX bindings for the flash-attention3 kernels · ☆11 · Updated 11 months ago
- Python scripts performing Open Vocabulary Object Detection using the YOLO-World model in ONNX · ☆55 · Updated last year
- Tianchi NVIDIA TensorRT Hackathon 2023, Generative AI Model Optimization track: third-place solution in the preliminary round · ☆49 · Updated last year
- Various test models in the WNNX format; they can be viewed with `pip install wnetron && wnetron` · ☆12 · Updated 3 years ago