VITA-Group / Diffusion4D
"Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models", Hanwen Liang*, Yuyang Yin*, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei
☆320 · Updated 8 months ago
Alternatives and similar repositories for Diffusion4D
Users interested in Diffusion4D are comparing it to the repositories listed below.
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆305 · Updated 6 months ago
- Official repository for "SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE" ☆175 · Updated 3 months ago
- List of papers on 4D Generation. ☆299 · Updated 11 months ago
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆249 · Updated 10 months ago
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆269 · Updated 3 weeks ago
- [ECCV 2024] LN3Diff creates high-quality 3D object mesh from text within 8 V100-SECONDS. ☆222 · Updated 4 months ago
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆229 · Updated last year
- Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians ☆195 · Updated last year
- [ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing ☆120 · Updated 4 months ago
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆240 · Updated last year
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆202 · Updated 11 months ago
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆502 · Updated last year
- [ICLR 2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models ☆207 · Updated 8 months ago
- Gaussian splatting implementation of Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions ☆109 · Updated last year
- [3DV 2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" ☆206 · Updated last year
- [ICLR 2025] GenXD: Generating Any 3D and 4D Scenes ☆212 · Updated 6 months ago
- [AAAI 2025] DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors ☆212 · Updated last year
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors ☆170 · Updated last year
- AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers ☆134 · Updated 2 weeks ago
- [TPAMI 2025, NeurIPS 2024] Video4DGen: Enhancing Video and 4D Generation through Mutual Optimization ☆365 · Updated 8 months ago
- [NeurIPS 2024] L4GM: Large 4D Gaussian Reconstruction Model ☆220 · Updated 8 months ago
- VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model ☆172 · Updated last year
- [CVPR'24] Consistent Novel View Synthesis without 3D Representation ☆165 · Updated last year
- [NeurIPS 2024] GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling ☆417 · Updated 9 months ago
- [CVPR 2024] Implementation for "GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation" ☆281 · Updated last year
- [CVPR 2025 Highlight] VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step ☆312 · Updated 3 months ago
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis ☆89 · Updated 10 months ago
- [ECCV 2024] DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling ☆183 · Updated 2 months ago
- [ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model ☆315 · Updated last year
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆173 · Updated last year