VITA-Group / Diffusion4D
[NeurIPS 2024] Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models
☆331 · Updated 11 months ago
Alternatives and similar repositories for Diffusion4D
Users interested in Diffusion4D are comparing it to the repositories listed below.
- Official repository for "SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE" ☆186 · Updated 6 months ago
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆308 · Updated 8 months ago
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆256 · Updated last year
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆278 · Updated last month
- List of papers on 4D Generation. ☆317 · Updated last year
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆231 · Updated last year
- TC4D: Trajectory-Conditioned Text-to-4D Generation ☆204 · Updated last year
- [AAAI 2025] DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors ☆224 · Updated last year
- Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians ☆201 · Updated last year
- AC3D: Analyzing and Improving 3D Camera Control in Video Diffusion Transformers ☆145 · Updated 3 months ago
- [NeurIPS 2024] L4GM: Large 4D Gaussian Reconstruction Model ☆234 · Updated 11 months ago
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆175 · Updated last year
- [ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing ☆126 · Updated 6 months ago
- [ECCV 2024] LN3Diff creates high-quality 3D object mesh from text within 8 V100-SECONDS. ☆224 · Updated last month
- [NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation ☆172 · Updated last year
- [TPAMI 2025, NeurIPS 2024] Video4DGen: Enhancing Video and 4D Generation through Mutual Optimization ☆367 · Updated 11 months ago
- [ICLR 2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models ☆215 · Updated 11 months ago
- [3DV 2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" ☆210 · Updated last year
- GenXD: Generating Any 3D and 4D Scenes. ICLR 2025 ☆219 · Updated 8 months ago
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆245 · Updated last year
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆512 · Updated last year
- [CVPR 2025 Highlight] VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step ☆328 · Updated 5 months ago
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors ☆170 · Updated last year
- [ICCV 2025] Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency ☆231 · Updated last month
- Code for "Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text" (NeurIPS 2024) ☆368 · Updated 9 months ago
- [CVPR 2025] Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation ☆137 · Updated 5 months ago
- [ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model ☆313 · Updated last year
- BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis ☆89 · Updated last year
- Official implementation of Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion ☆224 · Updated last year
- [NeurIPS 2024] GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling ☆422 · Updated last year