lodestone-rock / flow
☆167 · Updated this week
Alternatives and similar repositories for flow
Users who are interested in flow are comparing it to the libraries listed below.
- Official implementation of "Normalized Attention Guidance" · ☆179 · Updated 7 months ago
- ☆113 · Updated 9 months ago
- Official PyTorch Implementation of "Optimal Stepsize for Diffusion Sampling" · ☆195 · Updated 9 months ago
- The best OSS video generation models · ☆135 · Updated last year
- ☆91 · Updated 6 months ago
- Multimodal captioner · ☆210 · Updated this week
- 🔬 Visualize attention layers from Stable Diffusion · ☆91 · Updated 10 months ago
- The official code for NeurIPS 2025 "MagCache: Fast Video Generation with Magnitude-Aware Cache" · ☆258 · Updated 2 months ago
- ☆100 · Updated 3 months ago
- ☆79 · Updated 11 months ago
- ☆235 · Updated 9 months ago
- ☆75 · Updated 8 months ago
- IP Adapter Instruct · ☆211 · Updated last year
- See original repo here: https://github.com/google/RB-Modulation - ICLR 2025 (Oral) · ☆126 · Updated last year
- A detailed diagram laying out the full Flux.1 [dev] architecture as shared by Black Forest Labs at https://github.com/black-forest-labs/f… · ☆83 · Updated last year
- CogVideoX-LoRAs is a centralized repository for all LoRA models created for CogVideoX, filling the gap for a unified sharing space. With … · ☆81 · Updated last year
- Generate long weighted prompt embeddings for Stable Diffusion · ☆147 · Updated 9 months ago
- Official code for VMix: Improving Text-to-Image Diffusion Model with Cross-Attention Mixing Control · ☆191 · Updated last year
- Tiny AutoEncoder for Hunyuan Video (and other video models) · ☆294 · Updated 3 weeks ago
- Keyframe Interpolation with CogVideoX · ☆139 · Updated last year
- MoD Control Tile Upscaler for SDXL Pipeline · ☆61 · Updated 11 months ago
- 🔥🔥 Official Repo of UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward · ☆179 · Updated 4 months ago
- Various training scripts used to train bigasp · ☆111 · Updated 5 months ago
- ☆173 · Updated 4 months ago
- Text and image to video generation: Kandinsky 4.0 (2024) · ☆149 · Updated last year
- Vision Transformers Need Registers. And Gated MLPs. And +20M params. Tiny modality gap ensues! · ☆47 · Updated 8 months ago
- An inference and training framework for multiple image input in Flux Kontext dev · ☆436 · Updated 5 months ago
- ☆166 · Updated last year
- Accelerates Flux.1 image generation, just by using this node. · ☆140 · Updated last year
- [ICLR'2026] Scale-wise Distillation of Diffusion Models · ☆113 · Updated 4 months ago