VITA-Group / Diffusion4D
"Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models", Hanwen Liang*, Yuyang Yin*, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei
☆325 · Updated 9 months ago
Alternatives and similar repositories for Diffusion4D
Users interested in Diffusion4D are comparing it to the repositories listed below.
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆307 · Updated 7 months ago
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆273 · Updated this week
- Official repository for "SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE" ☆180 · Updated 5 months ago
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆254 · Updated 11 months ago
- List of papers on 4D Generation. ☆307 · Updated last year
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆230 · Updated last year
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆203 · Updated last year
- Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians ☆199 · Updated last year
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors ☆168 · Updated last year
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆243 · Updated last year
- [3DV-2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" ☆206 · Updated last year
- [NeurIPS 2024] L4GM: Large 4D Gaussian Reconstruction Model ☆226 · Updated 9 months ago
- [ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing ☆124 · Updated 5 months ago
- [ECCV-2024] LN3Diff creates high-quality 3D object mesh from text within 8 V100-SECONDS. ☆222 · Updated 5 months ago
- GenXD: Generating Any 3D and 4D Scenes. ICLR 2025 ☆218 · Updated 7 months ago
- [NeurIPS 2024] MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing ☆130 · Updated last year
- [CVPR2025] Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation ☆131 · Updated 4 months ago
- Code for "Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text" (NeurIPS 2024). ☆360 · Updated 8 months ago
- [NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation ☆172 · Updated last year
- [TPAMI 2025, NeurIPS 2024] Video4DGen: Enhancing Video and 4D Generation through Mutual Optimization ☆365 · Updated 10 months ago
- [ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model ☆313 · Updated last year
- [CVPR 2024] Implementation for "GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation" ☆280 · Updated last year
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆506 · Updated last year
- VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model ☆173 · Updated last year
- [ICLR2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models ☆211 · Updated 9 months ago
- ☆155 · Updated 3 months ago
- Code for SPAD: Spatially Aware Multiview Diffusers, CVPR 2024 ☆176 · Updated 9 months ago
- Code release of our paper "DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation". ☆131 · Updated 7 months ago
- [CVPR'24] Consistent Novel View Synthesis without 3D Representation ☆166 · Updated last year
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis ☆88 · Updated last year