SUDO-AI-3D / zero123plus
Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
☆1,952 · Updated last year
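A minimal inference sketch for Zero123++, assuming the diffusers custom-pipeline distribution described in the repository; the model identifiers `sudo-ai/zero123plus-v1.1` and `sudo-ai/zero123plus-pipeline`, the scheduler choice, and the file names are assumptions, so check the repo README for the current values.

```python
# Sketch: generate a multi-view image from a single input image with Zero123++,
# assuming the project's diffusers custom pipeline and Hugging Face model IDs.
import torch
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",                      # assumed model ID; see the repo README
    custom_pipeline="sudo-ai/zero123plus-pipeline",  # assumed custom pipeline ID
    torch_dtype=torch.float16,
)
# Swap in an Euler ancestral scheduler with trailing timestep spacing (an assumption
# about the recommended sampler configuration).
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)
pipeline.to("cuda")

cond = Image.open("input.png")                       # single conditioning image (hypothetical path)
result = pipeline(cond, num_inference_steps=75).images[0]
result.save("multiview.png")                         # grid of generated novel views
```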
Alternatives and similar repositories for zero123plus
Users interested in zero123plus are comparing it to the repositories listed below.
- Official code for the paper "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes". ☆1,474 · Updated last year
- Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV 2023) ☆2,951 · Updated last year
- [NeurIPS 2023] Official code of "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization" ☆1,685 · Updated last year
- [ECCV 2024 Oral] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ☆1,944 · Updated last year
- ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight) ☆1,551 · Updated last year
- Multi-view Diffusion for 3D Generation ☆942 · Updated 2 years ago
- [ICLR 2024] Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Prior… ☆1,606 · Updated 4 months ago
- [ICLR 2024 Spotlight] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image ☆1,003 · Updated last year
- [ICLR 2024 Oral] Generative Gaussian Splatting for Efficient 3D Content Creation ☆4,212 · Updated last year
- Official implementation of "LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching" ☆813 · Updated last year
- Code release for https://image-dream.github.io/ ☆789 · Updated last year
- An open-source implementation of Large Reconstruction Models ☆1,160 · Updated last year
- ☆567 · Updated last year
- [CVPR 2024] Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models, a generative model that produces textures without baked-in lighting ☆777 · Updated 11 months ago
- [ICCV 2023] Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior ☆1,878 · Updated last year
- [CVPR 2024] Text-to-3D using Gaussian Splatting ☆840 · Updated last year
- [ECCV 2024] Single Image to 3D Textured Mesh in 10 seconds with Convolutional Reconstruction Model. ☆667 · Updated 10 months ago
- TriplaneGaussian: A new hybrid representation for single-view 3D reconstruction. ☆900 · Updated last year
- [CVPR 2024, Highlight] Official code for DragDiffusion ☆1,235 · Updated last year
- ☆766 · Updated last year
- Official code for MotionCtrl [SIGGRAPH 2024] ☆1,462 · Updated 7 months ago
- [CVPR 2024] GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting ☆1,344 · Updated last year
- [CVPR 2024] 4K4D: Real-Time 4D View Synthesis at 4K Resolution ☆1,752 · Updated last year
- 3D generation code for MVDream ☆542 · Updated last year
- Text-to-3D Generation within 5 Minutes ☆717 · Updated last year
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023) ☆839 · Updated last year
- [CVPR 2024 Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation ☆2,944 · Updated 4 months ago
- CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets ☆951 · Updated last year
- Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models (ICCV 2023). ☆1,072 · Updated last year
- Lifting ControlNet for Generalized Depth Conditioning ☆479 · Updated last year