ragor114 / InsertDiffusion
Official implementation for the paper "InsertDiffusion: Identity Preserving Visualization of Objects through a Training-Free Diffusion Architecture".
☆21 · Updated last year
Alternatives and similar repositories for InsertDiffusion
Users interested in InsertDiffusion are comparing it to the repositories listed below.
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control ☆172 · Updated last year
- [CVPR 2024] DreamComposer: Controllable 3D Object Generation via Multi-View Conditions ☆134 · Updated last year
- Official repo for the paper "Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control" ☆37 · Updated 11 months ago
- Fine-tuning code for SV3D ☆110 · Updated last year
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2… ☆77 · Updated 4 months ago
- [CVPR 2024] Official PyTorch implementation of "A Unified Approach for Text- and Image-guided 4D Scene Generation" ☆93 · Updated last year
- ☆90 · Updated last year
- Official repo for the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" ☆158 · Updated 2 months ago
- Official PyTorch implementation of "Video Motion Transfer with Diffusion Transformers" ☆76 · Updated 4 months ago
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control ☆95 · Updated 6 months ago
- ☆132 · Updated last year
- [ACM MM 2024] Official repo for "DreamLCM: Towards High-Quality Text-to-3D Generation via Latent Consistency Model" ☆16 · Updated last year
- AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers ☆141 · Updated 2 months ago
- [CVPR 2024] Code for SPAD: Spatially Aware Multiview Diffusers ☆176 · Updated 10 months ago
- [ECCV 2024] ScaleDreamer: Scalable Text-to-3D Synthesis with Asynchronous Score Distillation ☆53 · Updated 8 months ago
- [ICLR 2024] Official PyTorch implementation of "Text-to-3D with Classifier Score Distillation" ☆134 · Updated last year
- Diffusion Handles: a training-free method that enables 3D-aware image edits using a pre-trained diffusion model ☆41 · Updated 10 months ago
- [CVPR 2025] Generative Omnimatte ☆153 · Updated 6 months ago
- Official implementation of "Force Prompting: Video Generation Models Can Learn and Generalize Physics-based Control Signals" (NeurIPS 202… ☆139 · Updated 2 months ago
- [CVPR 2024 Highlight] Official code for "Amodal Completion via Progressive Mixed Context Diffusion" ☆52 · Updated last year
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆173 · Updated last year
- [CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion ☆134 · Updated last year
- ☆131 · Updated last year
- ObjCtrl-2.5D ☆57 · Updated 8 months ago
- [CVPR 2024] DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior ☆73 · Updated last year
- [ECCV 2024] Official implementation of "DragAPart: Learning a Part-Level Motion Prior for Articulated Objects" ☆83 · Updated last year
- ConsistNet: Enforcing 3D Consistency for Multi-view Images Diffusion ☆48 · Updated 2 years ago
- ☆69 · Updated last year
- Unofficial implementation of "Stable Video Diffusion Multi-View" ☆80 · Updated last year
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆204 · Updated last year