VITA-Group / Comp4D
"Comp4D: Compositional 4D Scene Generation", Dejia Xu*, Hanwen Liang*, Neel P. Bhatt, Hezhen Hu, Hanxue Liang, Konstantinos N. Plataniotis, and Zhangyang Wang
☆78 · Updated last year
Alternatives and similar repositories for Comp4D
Users interested in Comp4D are comparing it to the repositories listed below.
- [ECCV 2024] Official implementation of DragAPart: Learning a Part-Level Motion Prior for Articulated Objects ☆83 · Updated last year
- [CVPR 2024] Official PyTorch implementation of "A Unified Approach for Text- and Image-guided 4D Scene Generation" ☆93 · Updated last year
- [ECCV 2024] HiFi-123: Towards High-fidelity One Image to 3D Content Generation ☆67 · Updated last year
- ☆87 · Updated 7 months ago
- Official code for 4Diffusion: Multi-view Video Diffusion Model for 4D Generation ☆118 · Updated last year
- Source code for the paper GS-DiT: Advancing Video Generation with Pseudo 4D Gaussian Fields through Efficient Dense 3D Point Tracking ☆56 · Updated last year
- [ICLR 2025] Diffusion²: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models ☆55 · Updated 10 months ago
- [ECCV 2024] Official code for SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer ☆112 · Updated 6 months ago
- [CVPR 2025] Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation ☆138 · Updated 6 months ago
- Official repo for the paper Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control ☆37 · Updated last year
- Official code for the paper F3D-Gaus: Feed-forward 3D-aware Generation on ImageNet with Cycle-Aggregative Gaussian Splatting ☆50 · Updated 10 months ago
- ☆70 · Updated last year
- WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes ☆107 · Updated 9 months ago
- Open-sourced video dataset with dynamic-scene and camera-movement annotations ☆83 · Updated 8 months ago
- 📷 Camera-controlled text-to-video generation, now with intrinsics, distortion, and orientation control! ☆106 · Updated last week
- [CVPR 2024 Highlight] Code release for "The More You See in 2D, the More You Perceive in 3D" ☆64 · Updated last year
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis ☆89 · Updated last year
- [CVPR 2024] MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation ☆131 · Updated last year
- [ECCV 2024] Official code for Learn to Optimize Denoising Scores: A Unified and Improved Diffusion Prior for 3D Generation ☆72 · Updated last year
- Official code for TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts and Image-Prompts (SIGGRAPH 2024 & TOG) ☆121 · Updated last year
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆174 · Updated last year
- An unofficial implementation of DreamScene360 ☆83 · Updated last year
- Unofficial implementation of "Stable Video Diffusion Multi-View" ☆79 · Updated last year
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆203 · Updated last year
- ☆95 · Updated 8 months ago
- [CVPR 2025] Official code for the paper "SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting Synthesis" ☆131 · Updated 10 months ago
- [NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation ☆172 · Updated last year
- MEt3R: Measuring Multi-View Consistency in Generated Images ☆147 · Updated 6 months ago
- [CVPR 2024] Code for SPAD: Spatially Aware Multiview Diffusers ☆175 · Updated 11 months ago
- [CVPR 2024] GraphDreamer: a novel framework for generating compositional 3D scenes from scene graphs ☆195 · Updated last year