Combining Teacache with xDiT to Accelerate Visual Generation Models
☆32 · Apr 21, 2025 · Updated 11 months ago
Alternatives and similar repositories for Teacache-xDiT
Users interested in Teacache-xDiT are comparing it to the libraries listed below.
- https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching ☆426 · Jul 5, 2025 · Updated 9 months ago
- Fast and memory-efficient exact attention ☆20 · Apr 10, 2026 · Updated last week
- An out-of-the-box inference acceleration engine for Diffusion and DiT models ☆60 · Mar 21, 2025 · Updated last year
- ☆392 · Apr 9, 2026 · Updated last week
- This project is based on the diffusers [LTX-Video](https://github.com/Lightricks/LTX-Video) algorithm, optimized and accelerate… ☆13 · Dec 31, 2024 · Updated last year
- KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation ☆48 · Mar 30, 2026 · Updated 2 weeks ago
- Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model ☆1,305 · Jun 8, 2025 · Updated 10 months ago
- A fork of flux-fast that makes it even faster with cache-dit: 3.3× speedup on an NVIDIA L20. ☆24 · Jul 18, 2025 · Updated 9 months ago
- ☆38 · Dec 18, 2025 · Updated 4 months ago
- A parallel VAE that avoids OOM in high-resolution image generation ☆90 · Mar 12, 2026 · Updated last month
- ☆10 · Jan 24, 2024 · Updated 2 years ago
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆103 · Mar 12, 2024 · Updated 2 years ago
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆965 · Jun 27, 2024 · Updated last year
- ☆18 · Dec 2, 2024 · Updated last year
- Effortlessly deploy CREStereo in PyTorch with a simple pip install. ☆14 · Mar 1, 2024 · Updated 2 years ago
- In-depth coverage of top papers from 2018–2024, with open-source code summaries (continuously updated) ☆14 · Sep 1, 2024 · Updated last year
- ☆10 · Oct 5, 2022 · Updated 3 years ago
- KUL and FBK repository (with code) for the 4th International Scan-to-BIM competition ☆23 · Jan 10, 2025 · Updated last year
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆156 · Mar 21, 2025 · Updated last year
- EleutherAI ML Performance reading group repository (slides, meeting recordings, annotated papers) ☆31 · Mar 20, 2026 · Updated 3 weeks ago
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆214 · Sep 27, 2025 · Updated 6 months ago
- MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models ☆184 · Jul 21, 2025 · Updated 8 months ago
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆538 · Mar 19, 2026 · Updated 3 weeks ago
- The official training code of OmniSVG ☆39 · Jan 19, 2026 · Updated 2 months ago
- WeiYun Downloader ☆14 · Jul 18, 2014 · Updated 11 years ago
- [Technical Report] A Comprehensive Evaluation of Nano Banana Pro on 14 Low-Level Vision Tasks and 40 Datasets ☆71 · Dec 24, 2025 · Updated 3 months ago
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆262 · Dec 27, 2024 · Updated last year
- [ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-t… ☆3,296 · Jan 17, 2026 · Updated 3 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆48 · Feb 17, 2026 · Updated 2 months ago
- 🎬 3.7× faster video generation E2E · 🖼️ 1.6× faster image generation E2E · ⚡ ColumnSparseAttn 9.3× vs FlashAttn-3 · 💨 ColumnSparseGEMM 2.5× … ☆104 · Sep 8, 2025 · Updated 7 months ago
- [NeurIPS 2025 D&B 🔥] OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation ☆208 · Updated this week
- Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning ☆20 · Feb 4, 2022 · Updated 4 years ago
- ☆65 · Oct 25, 2025 · Updated 5 months ago
- Official PyTorch implementation of The Linear Attention Resurrection in Vision Transformer ☆16 · Sep 7, 2024 · Updated last year
- PyTorch Fast R-CNN and Faster R-CNN implementation. Uses the latest PyTorch and TorchVision; no installation required. ☆15 · Feb 6, 2023 · Updated 3 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆664 · Jan 15, 2026 · Updated 3 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆976 · Feb 25, 2026 · Updated last month
- A unified inference and post-training framework for accelerated video generation. ☆3,396 · Updated this week
- The implementation for FREE-Merging: Fourier Transform for Model Merging with Lightweight Experts (ICCV 2025) ☆14 · Jun 26, 2025 · Updated 9 months ago