3DTopia / Phidias-Diffusion
[ICLR 2025] Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion
☆279 · Updated 6 months ago
Alternatives and similar repositories for Phidias-Diffusion
Users interested in Phidias-Diffusion are comparing it to the libraries listed below.
- [CVPR 2025] MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D ☆230 · Updated 3 weeks ago
- Interactive Text-to-Texture Synthesis via Unified Depth-aware Inpainting ☆214 · Updated last year
- [SIGGRAPH 2025] LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation ☆287 · Updated last month
- [TPAMI 2025] DreamCraft3D++: Efficient Hierarchical 3D Generation with Multi-Plane Reconstruction Model ☆164 · Updated 8 months ago
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation" ☆405 · Updated 2 weeks ago
- [AAAI 2025🔥] Official implementation of Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle ☆209 · Updated 6 months ago
- [3DV 2025] Code for RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion ☆288 · Updated 5 months ago
- [MM24] Official codes and datasets for ACM MM24 paper "Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models"… ☆277 · Updated last year
- Official codes for "AutoVFX: Physically Realistic Video Editing from Natural Language Instructions" ☆322 · Updated 5 months ago
- [CVPR 2025] Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion ☆134 · Updated 6 months ago
- [CVPR 2025 Highlight] Material Anything: Generating Materials for Any 3D Object via Diffusion ☆310 · Updated 3 weeks ago
- [ICLR 2025] DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity Preservation ☆92 · Updated 7 months ago
- [ICCV 2025] Official code for AnimateAnyMesh: A Feed-Forward 4D Foundation Model for Text-Driven Universal Mesh Animation ☆219 · Updated last week
- [ICCV 2025] Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models ☆473 · Updated this week
- [CVPR'25] Official code for the paper "Scaling Mesh Generation via Compressive Tokenization" ☆270 · Updated last month
- [CVPR'24] Interactive3D: Create What You Want by Interactive 3D Generation ☆187 · Updated 2 months ago
- [ICLR'25] Official implementation of Consistent Flow Distillation for Text-to-3D Generation ☆190 · Updated 8 months ago
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation ☆389 · Updated last month
- [NeurIPS 2024] Animate3D: Animating Any 3D Model with Multi-view Video Diffusion ☆204 · Updated 10 months ago
- Official repository for the paper "CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models" ☆263 · Updated last week
- Official implementation of "AnimaX: Animating the Inanimate in 3D with Joint Video-Pose Diffusion Models" ☆273 · Updated 2 months ago
- 3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation ☆336 · Updated 8 months ago
- [ICCV 2025 Highlight] Unleashing Vecset Diffusion Model for Fast Shape Generation within 1 Second ☆269 · Updated last month
- [CVPR 2025] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images ☆359 · Updated 5 months ago
- Implementation of Extreme Viewpoint 4D Video Generation ☆238 · Updated last week
- [SIGGRAPH 2025] PrimitiveAnything: Human-Crafted 3D Primitive Assembly Generation with Auto-Regressive Transformer ☆354 · Updated 4 months ago
- [ICLR 2025] GenXD: Generating Any 3D and 4D Scenes ☆211 · Updated 5 months ago