ViewDiff generates high-quality, multi-view-consistent images of real-world 3D objects in authentic surroundings (CVPR 2024).
☆379, updated Feb 16, 2026
Alternatives and similar repositories for ViewDiff
Users interested in ViewDiff are comparing it to the repositories listed below.
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators (☆514, updated Mar 26, 2024)
- Code for SPAD: Spatially Aware Multiview Diffusers, CVPR 2024 (☆178, updated Feb 8, 2025)
- [ICLR 2024 Spotlight] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (☆1,027, updated Oct 26, 2025)
- [CVPR 2024 Oral] EscherNet: A Generative Model for Scalable View Synthesis (☆367, updated Sep 10, 2024)
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models (☆174, updated Jul 24, 2024)
- Official implementation of "LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching" (☆826, updated May 24, 2024)
- MVDiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction (☆142, updated Apr 27, 2024)
- Multi-view Diffusion for 3D Generation (☆973, updated Oct 7, 2023)
- ☆631, updated Dec 9, 2024
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, NeurIPS 2023 (Spotlight) (☆559, updated Jan 6, 2024)
- [3DV 2025] Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting" (☆213, updated Jun 21, 2024)
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control (☆172, updated Dec 2, 2024)
- [CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors (☆169, updated Mar 13, 2024)
- ☆574, updated Nov 19, 2023
- ☆527, updated Nov 29, 2023
- ☆273, updated May 31, 2024
- Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation (☆630, updated Apr 4, 2024)
- [CVPR 2024 Highlight] RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D. Live demo: https:/… (☆475, updated Sep 27, 2024)
- 3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation (☆343, updated Dec 25, 2024)
- An open-source implementation of Large Reconstruction Models (☆1,200, updated May 6, 2024)
- Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model (☆2,009, updated Feb 23, 2024)
- [ECCV 2024 Oral] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation (☆2,038, updated Aug 20, 2024)
- [NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation (☆173, updated Sep 30, 2024)
- [ECCV'24] GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image (☆931, updated Dec 7, 2024)
- [CVPR'24] Consistent Novel View Synthesis without 3D Representation (☆167, updated Aug 27, 2024)
- Text-to-3D Generation within 5 Minutes (☆729, updated Mar 10, 2024)
- Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion (CVPR 2025) (☆145, updated Oct 22, 2025)
- [ECCV 2024] Single Image to 3D Textured Mesh in 10 Seconds with Convolutional Reconstruction Model (☆684, updated Nov 28, 2024)
- ☆131, updated Aug 10, 2024
- Official implementation of "AToM: Amortized Text-to-Mesh using 2D Diffusion" (☆84, updated Dec 10, 2025)
- [CVPR 2024] Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models, a generative model for lighting-free baked textures (☆792, updated Nov 5, 2024)
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei (☆249, updated Jun 24, 2024)
- Official implementation of "Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting" (ECCV…) (☆278, updated Jan 21, 2026)
- [CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion (☆137, updated Aug 30, 2024)
- Official code for the NeurIPS 2024 paper "LRM-Zero: Training Large Reconstruction Models with Synthesized Data" (☆153, updated Oct 7, 2024)
- [CVPR 2024] MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation (☆131, updated Apr 29, 2024)
- Code repository for "ZeroShape: Regression-based Zero-shot Shape Reconstruction" (☆137, updated Jul 18, 2024)
- Source code of the paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" (☆313, updated Mar 30, 2025)
- [ICLR 2024] Official implementation of Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video (☆279, updated Nov 14, 2024)