rajeevsrao / TensorRT
TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
☆19 · Updated last year
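For orientation, the typical TensorRT workflow is offline: parse a trained model (commonly exported to ONNX), build an engine optimized for the target GPU, and serialize it for later inference. The sketch below shows that build step using the TensorRT Python bindings; it is an illustrative example only, assuming TensorRT 8.x with the ONNX parser, and the file names `model.onnx` / `model.plan` are hypothetical, not taken from this repository.

```python
# Illustrative sketch of the standard TensorRT build flow (assumes TensorRT 8.x).
# "model.onnx" and "model.plan" are hypothetical file names.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse an ONNX model into the TensorRT network definition.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

# Build an optimized, serialized engine (FP16 enabled where supported).
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
serialized_engine = builder.build_serialized_network(network, config)

with open("model.plan", "wb") as f:
    f.write(serialized_engine)
```

At runtime the serialized plan is deserialized with a `trt.Runtime` and executed through an execution context; several of the repositories listed below apply this kind of engine build to Stable Diffusion and other diffusion pipelines.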
Alternatives and similar repositories for TensorRT
Users interested in TensorRT are comparing it to the libraries listed below.
- stable diffusion, controlnet, tensorrt, accelerate ☆57 · Updated 2 years ago
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training). ☆360 · Updated 3 weeks ago
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆304 · Updated last month
- Faster generation with text-to-image diffusion models. ☆215 · Updated 8 months ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆202 · Updated 4 months ago
- Experimental usage of stable-fast and TensorRT. ☆208 · Updated 10 months ago
- Generate long weighted prompt embeddings for Stable Diffusion ☆124 · Updated 2 months ago
- Flux diffusion model implementation using quantized fp8 matmul; the remaining layers use faster half-precision accumulate, which is ~2x fast… ☆269 · Updated 8 months ago
- Accelerates Flux.1 image generation, just by using this node. ☆136 · Updated 6 months ago
- ☆431 · Updated last year
- ☆55 · Updated last year
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆501 · Updated 2 months ago
- Deploy stable diffusion model with onnx/tensorrt + tritonserver ☆123 · Updated last year
- An efficient implementation of Stable-Diffusion-XL ☆47 · Updated last year
- implementation of the IPAdapter models for HF Diffusers ☆174 · Updated last year
- ☆118 · Updated last year
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆231 · Updated 6 months ago
- ☆100 · Updated last year
- A framework for reviewing Stable Diffusion checkpoints ☆94 · Updated last year
- SSD-1B, an open-source text-to-image model that is 50% smaller and 60% faster than SDXL. ☆177 · Updated last year
- Optimum version of a UI for Stable Diffusion, running on ONNX models for faster inference, working on most common GPU vendors: NVIDIA, AMD… ☆26 · Updated last year
- Diffusers training with mmengine ☆102 · Updated last year
- The first open-source Triton inference engine for Stable Diffusion, specifically for SDXL ☆12 · Updated last year
- ☆116 · Updated last week
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆690 · Updated 6 months ago
- ☆283 · Updated 5 months ago
- ☆83 · Updated 10 months ago
- ONNX-Powered Inference for State-of-the-Art Face Upscalers ☆98 · Updated 11 months ago
- A diffusers-based implementation of HyperDreamBooth ☆133 · Updated last year
- Official repository of the paper "Trajectory Consistency Distillation" ☆340 · Updated last year