timudk / flux_triton
Writing FLUX in Triton
☆32 · Updated 6 months ago
Alternatives and similar repositories for flux_triton:
Users interested in flux_triton are comparing it to the libraries listed below.
- ☆32 · Updated 4 months ago
- Minimal Differentiable Image Reward Functions ☆51 · Updated 2 weeks ago
- Official repository for the paper "VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization" ☆33 · Updated 6 months ago
- ☆27 · Updated 7 months ago
- Triton kernels for Flux ☆20 · Updated 2 months ago
- WIP PyTorch code for stably training single-step, mode-dropping, deterministic autoencoders ☆25 · Updated 10 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Updated 4 months ago
- ☆21 · Updated 9 months ago
- PyTorch half-precision GEMM library with fused optional bias and optional ReLU/GELU ☆55 · Updated 3 months ago
- ☆37 · Updated 10 months ago
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. ☆39 · Updated last week
- TerDiT: Ternary Diffusion Models with Transformers ☆69 · Updated 9 months ago
- LoRA fine-tuning directly on quantized models. ☆27 · Updated 3 months ago
- Recaption large (Web)Datasets with vLLM and save the artifacts. ☆48 · Updated 3 months ago
- Official codebase for Margin-aware Preference Optimization for Aligning Diffusion Models without Reference (MaPO). ☆71 · Updated 9 months ago
- ☆27 · Updated 10 months ago
- Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis (arXiv, 2024) ☆50 · Updated 3 months ago
- The official repo of continuous speculative decoding ☆25 · Updated 4 months ago
- ☆43 · Updated 3 weeks ago
- Focused on fast experimentation and simplicity ☆69 · Updated 2 months ago
- ☆25 · Updated 9 months ago
- PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation ☆30 · Updated 4 months ago
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆17 · Updated last week
- Official PyTorch implementation of the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆50 · Updated last month
- Patch convolution to avoid large GPU memory usage of Conv2D ☆84 · Updated last month