zqh0253 / 3DitScene
[ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting
☆255 · Updated last year
Alternatives and similar repositories for 3DitScene
Users interested in 3DitScene are comparing it to the repositories listed below.
- Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians ☆201 · Updated last year
- [3DV 2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" ☆207 · Updated last year
- [CVPR 2025] MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model ☆126 · Updated 6 months ago
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆204 · Updated last year
- Official repo of the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" ☆156 · Updated 2 months ago
- [NeurIPS 2024] MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing ☆132 · Updated last year
- [ICLR 2025] GenXD: Generating Any 3D and 4D Scenes ☆218 · Updated 8 months ago
- [NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation ☆172 · Updated last year
- Official code for TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts (SIGGRAPH 2024 & TOG) ☆119 · Updated last year
- Official code of "Imagine360: Immersive 360 Video Generation from Perspective Anchor" ☆148 · Updated 6 months ago
- [CVPR'24] Consistent Novel View Synthesis without 3D Representation ☆166 · Updated last year
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆230 · Updated last year
- ☆156 · Updated 3 months ago
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis ☆89 · Updated last year
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors ☆170 · Updated last year
- [ECCV 2024] DGE: Direct Gaussian 3D Editing by Consistent Multi-view Editing ☆121 · Updated 4 months ago
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆244 · Updated last year
- [CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion ☆134 · Updated last year
- [CVPR 2024] Official PyTorch implementation of "A Unified Approach for Text- and Image-guided 4D Scene Generation" ☆93 · Updated last year
- [ECCV 2024] Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation ☆127 · Updated last year
- VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model ☆174 · Updated last year
- [ICCV 2025] Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency ☆224 · Updated last month
- AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers ☆140 · Updated 2 months ago
- Source code of the paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆308 · Updated 8 months ago
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆173 · Updated last year
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control ☆171 · Updated last year
- The official implementation has been released at https://github.com/VAST-AI-Research/MIDI-3D ☆127 · Updated 8 months ago
- [NeurIPS 2024] Animate3D: Animating Any 3D Model with Multi-view Video Diffusion ☆214 · Updated last year
- ☆96 · Updated last year
- [ICLR 2024] Official Implementation of Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video ☆275 · Updated last year