xlite-dev / flux-faster
A fork of flux-fast that makes it even faster with cache-dit, reporting a 3.3x speedup on an NVIDIA L20.
☆19 · Updated 2 weeks ago
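The speedup comes from caching: across adjacent denoising steps, a DiT's intermediate features change little, so block outputs can be reused instead of recomputed. Below is a minimal, hypothetical sketch of that residual-caching idea in plain PyTorch. `CachedBlock` and `denoise` are illustrative names, not the cache-dit or flux-faster API, and a real pipeline would also carry timestep/text conditioning and a scheduler, which are omitted here.

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Wraps one transformer block and reuses its residual on cached steps."""
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block
        self._residual = None  # residual saved at the last full-compute step

    def forward(self, x: torch.Tensor, use_cache: bool) -> torch.Tensor:
        if use_cache and self._residual is not None:
            # Skip the block entirely: reapply the residual from the last refresh.
            return x + self._residual
        out = self.block(x)
        self._residual = (out - x).detach()  # cache the residual, not raw output
        return out

@torch.no_grad()
def denoise(blocks, x, num_steps=28, refresh_every=3):
    # Recompute all blocks every `refresh_every` steps; reuse caches otherwise.
    for step in range(num_steps):
        use_cache = step % refresh_every != 0
        for blk in blocks:
            x = blk(x, use_cache)
    return x

# Toy demo: nn.Linear stands in for a DiT block.
blocks = [CachedBlock(nn.Linear(64, 64)) for _ in range(4)]
print(denoise(blocks, torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

With `refresh_every=3`, roughly two thirds of the block computations are skipped; the quality/speed trade-off is governed by how often the cache is refreshed.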
Alternatives and similar repositories for flux-faster
Users interested in flux-faster are comparing it to the libraries listed below.
- 🤗 A Training-free and Easy-to-use Cache Acceleration Toolbox for Diffusion Transformers. ☆117 · Updated this week
- A parallel VAE that avoids OOM for high-resolution image generation (see the tiled-decoding sketch after this list). ☆68 · Updated 6 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated 8 months ago
- An auxiliary project analyzing the key/value (KV) characteristics of DiT attention. ☆31 · Updated 8 months ago
- ☆39 · Updated 2 months ago
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆42 · Updated 8 months ago
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆31 · Updated 2 months ago
- A CUDA kernel for NHWC GroupNorm for PyTorch (a pure-PyTorch reference for the op is sketched below)
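On the parallel VAE entry: that project parallelizes decoding across devices, which is not shown here. A related and widely used single-GPU workaround for VAE OOM at high resolution is tiled decoding, available out of the box via diffusers' `AutoencoderKL.enable_tiling()`. The sketch below assumes access to the gated `black-forest-labs/FLUX.1-dev` weights (the model ID is just an example) and feeds random latents purely to demonstrate memory behavior; real FLUX latents would also need the VAE's scaling and shift factors applied.

```python
import torch
from diffusers import AutoencoderKL

# Load only the FLUX VAE (16 latent channels, 8x spatial upsampling).
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")
vae.enable_tiling()  # decode in overlapping tiles instead of one huge tensor

# A 256x256 latent decodes to a 2048x2048 image without materializing
# the full-resolution activation maps all at once.
latents = torch.randn(1, 16, 256, 256, dtype=torch.bfloat16, device="cuda")
with torch.no_grad():
    image = vae.decode(latents).sample
print(image.shape)  # torch.Size([1, 3, 2048, 2048])
```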
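On the NHWC GroupNorm entry: PyTorch's `nn.GroupNorm` is defined over NCHW, so channels-last workloads pay for implicit layout permutes, which is what a dedicated NHWC kernel avoids. The snippet below is not that CUDA kernel; it is a plain-PyTorch reference (hypothetical helper name `group_norm_nhwc`) showing the exact semantics such a kernel must reproduce, checked against the built-in op.

```python
import torch
import torch.nn.functional as F

def group_norm_nhwc(x_nhwc, num_groups, weight, bias, eps=1e-5):
    # x_nhwc: (N, H, W, C). Normalize per (sample, group) over H, W and the
    # contiguous channels inside each group, matching torch.nn.GroupNorm.
    n, h, w, c = x_nhwc.shape
    g = num_groups
    xg = x_nhwc.reshape(n, h * w, g, c // g)
    mean = xg.mean(dim=(1, 3), keepdim=True)
    var = xg.var(dim=(1, 3), keepdim=True, unbiased=False)
    xg = (xg - mean) / torch.sqrt(var + eps)
    return xg.reshape(n, h, w, c) * weight + bias  # affine broadcasts over C

# Verify against the built-in NCHW implementation.
x = torch.randn(2, 8, 8, 32)  # NHWC
weight, bias = torch.ones(32), torch.zeros(32)
ref = F.group_norm(x.permute(0, 3, 1, 2), 4, weight, bias).permute(0, 2, 3, 1)
out = group_norm_nhwc(x, 4, weight, bias)
print(torch.allclose(out, ref, atol=1e-5))  # True
```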