VITA-Group / Diffusion4D
"Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models", Hanwen Liang*, Yuyang Yin*, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei
☆297 Updated 4 months ago
Alternatives and similar repositories for Diffusion4D
Users interested in Diffusion4D are comparing it to the libraries listed below
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆230 Updated 6 months ago
- GenXD: Generating Any 3D and 4D Scenes. ICLR 2025 ☆198 Updated 2 months ago
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆236 Updated 11 months ago
- [3DV-2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" ☆204 Updated 11 months ago
- [ECCV-2024] LN3Diff creates high-quality 3D object meshes from text within 8 V100-seconds. ☆210 Updated 2 weeks ago
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors ☆169 Updated last year
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆297 Updated 2 months ago
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆193 Updated 7 months ago
- Official code for DreamEditor: Text-Driven 3D Scene Editing with Neural Fields (Siggraph Asia 2023) ☆124 Updated last year
- Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians ☆189 Updated 10 months ago
- [TPAMI 2025, NeurIPS 2024] Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels ☆362 Updated 4 months ago
- Official repository for "SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE" ☆151 Updated 2 weeks ago
- [AAAI 2025] DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors ☆204 Updated 11 months ago
- [NeurIPS 2024] L4GM: Large 4D Gaussian Reconstruction Model ☆201 Updated 4 months ago
- V3D: Video Diffusion Models are Effective 3D Generators ☆479 Updated last year
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆253 Updated 7 months ago
- List of papers on 4D Generation. ☆274 Updated 7 months ago
- [ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing ☆113 Updated this week
- [NeurIPS 2024] GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling ☆406 Updated 5 months ago
- 4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling ☆326 Updated 5 months ago
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆226 Updated 8 months ago
- [NeurIPS 2024] Animate3D: Animating Any 3D Model with Multi-view Video Diffusion ☆191 Updated 7 months ago
- (Siggraph Asia 2023) Official code of "HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image" ☆208 Updated 7 months ago
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆165 Updated 10 months ago
- ☆234 Updated 10 months ago
- [ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model ☆312 Updated 10 months ago
- ☆197 Updated 6 months ago
- Official PyTorch implementation of DiffTF (Accepted by ICLR 2024) ☆191 Updated 10 months ago
- [CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion ☆124 Updated 9 months ago
- [ECCV2024] DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling