WikiChao / ScalingConcept
☆23 · Updated 10 months ago
Alternatives and similar repositories for ScalingConcept
Users who are interested in ScalingConcept are comparing it to the repositories listed below
- This is the official repository for "LatentMan: Generating Consistent Animated Characters using Image Diffusion Models" [CVPRW 2024] ☆22 · Updated last year
- The official UniVerse-1 code. ☆50 · Updated last week
- ☆47 · Updated last month
- Trying to implement https://arxiv.org/abs/2305.08891 ☆34 · Updated 2 years ago
- Official implementation of "VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis" ☆19 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of GrounDiT ☆56 · Updated 9 months ago
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated 9 months ago
- ☆31 · Updated 5 months ago
- We introduce OpenStory++, a large-scale open-domain dataset focused on enabling MLLMs to perform storytelling generation tasks. ☆15 · Updated last year
- [ECCV 2024] PanoFree: Tuning-Free Holistic Multi-view Image Generation with Cross-view Self-Guidance ☆23 · Updated last year
- ☆64 · Updated last year
- Code for "TVG: A Training-free Transition Video Generation Method with Diffusion Models" ☆42 · Updated last year
- ☆66 · Updated last year
- [ICLR 2024] Code for FreeNoise based on LaVie ☆34 · Updated last year
- ☆17 · Updated last year
- Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Model (arXiv 2025) ☆32 · Updated 2 months ago
- The public source code of "FreCaS: Efficient Higher-Resolution Image Generation via Frequency-aware Cascaded Sampling" ☆27 · Updated 2 months ago
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method ☆27 · Updated last year
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆93 · Updated 11 months ago
- This is the project for "Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation" ☆44 · Updated 5 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- Official PyTorch implementation for SingleInsert ☆28 · Updated last year
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation ☆24 · Updated 11 months ago
- The official code for "IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation" ☆19 · Updated 3 months ago
- [CVPR 2025] Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion ☆41 · Updated 5 months ago
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆71 · Updated 8 months ago
- An official implementation of SwapAnyone. ☆66 · Updated 6 months ago
- Learning Motion from Low-Rank Adaptation ☆45 · Updated last year
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆55 · Updated last year
- Navigate dreamscapes with a click – your chosen point guides the drone’s flight in a thrilling visual journey. ☆48 · Updated 2 weeks ago