Libraries-Openly-Fused / cvGPUSpeedup
A faster implementation of OpenCV-CUDA that uses OpenCV objects, and more!
☆53 · Updated last week
Alternatives and similar repositories for cvGPUSpeedup
Users interested in cvGPUSpeedup are comparing it to the libraries listed below.
- Model compression for ONNX ☆97 · Updated 10 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆41 · Updated 2 years ago
- Awesome code, projects, books, etc. related to CUDA ☆24 · Updated last month
- HunyuanDiT with TensorRT and libtorch ☆18 · Updated last year
- Zero-copy multimodal vector DB with CUDA and CLIP/SigLIP ☆61 · Updated 4 months ago
- ☆19 · Updated 2 years ago
- Enables everyone to develop, optimize, and deploy AI models natively on their own devices ☆10 · Updated last year
- Snapdragon Neural Processing Engine (SNPE) SDK: a Qualcomm Snapdragon software accelerate… ☆34 · Updated 3 years ago
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆42 · Updated last year
- ☆33 · Updated 3 months ago
- ONNX Command-Line Toolbox ☆35 · Updated 11 months ago
- Wanwu models release; code will be released soon ☆24 · Updated 3 years ago
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆20 · Updated 10 months ago
- IntLLaMA: a fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- Tianchi NVIDIA TensorRT Hackathon 2023 (generative AI model optimization contest): third-place solution in the preliminary round ☆50 · Updated 2 years ago
- A toolkit to help optimize large ONNX models ☆159 · Updated last year
- Nsight Systems in Docker ☆20 · Updated last year
- Implementation of a methodology that enables arbitrary user-defined GPU kernel fusion for non-CUDA programmers ☆23 · Updated this week
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 · Updated 2 years ago
- ☆15 · Updated 4 months ago
- C++ implementations of various tokenizers (SentencePiece, tiktoken, etc.) ☆35 · Updated last week
- ☆27 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆42 · Updated 3 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆45 · Updated 2 months ago
- EfficientViT is a new family of vision models for efficient high-resolution vision ☆27 · Updated last year
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- Mobile App Open ☆61 · Updated this week
- FlexAttention w/ FlashAttention3 support ☆27 · Updated 11 months ago
- ☆11 · Updated last month