mdyao / Awesome-3D-AIGC
A curated list of papers and open-source resources focused on 3D AIGC.
☆333 Updated last year
Alternatives and similar repositories for Awesome-3D-AIGC
Users interested in Awesome-3D-AIGC are comparing it to the libraries listed below:
- Hash3D: Training-free Acceleration for 3D Generation ☆181 Updated last year
- [NeurIPS 2024] SceneCraft: Layout-Guided 3D Scene Generation ☆228 Updated 4 months ago
- [NeurIPS 2024] GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling ☆423 Updated last year
- A curated list of awesome AIGC 3D papers ☆729 Updated 5 months ago
- ☆159 Updated last year
- [CVPR 2024 Oral] EscherNet: A Generative Model for Scalable View Synthesis ☆362 Updated last year
- [ICCV 2023] Official PyTorch implementation of Texture Generation on 3D Meshes with Point-UV Diffusion ☆215 Updated 2 years ago
- Official implementation of Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting (ECCV… ☆276 Updated this week
- [CVPR 2024 Highlight] SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors ☆238 Updated last year
- 3D Gaussian Splatting extension of threestudio ☆199 Updated last year
- [SIGGRAPH Asia 2024, Best Paper Honorable Mention] Official implementation of the SIGGRAPH Asia journal article TEXGen: a Ge… ☆323 Updated last year
- A growing curation of Text-to-3D and Diffusion-to-3D works ☆576 Updated this week
- List of papers on 4D Generation ☆318 Updated last year
- ☆240 Updated last year
- Awesome 3D Stylization - Advances in 3D Neural Stylization ☆143 Updated 2 months ago
- [NeurIPS 2024] Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer ☆222 Updated 11 months ago
- [NeurIPS 2024] Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models ☆332 Updated last year
- [ICCV 2023] Single-Stage Diffusion NeRF ☆447 Updated last year
- [CVPR 2024 Highlight] RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D. Live Demo: https:/… ☆473 Updated last year
- A niche toolkit for 3D computer vision tasks ☆318 Updated last week
- [SIGGRAPH Asia 2024] StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting ☆202 Updated last year
- [CVPR'24] Interactive3D: Create What You Want by Interactive 3D Generation ☆187 Updated 6 months ago
- Source code of the paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆311 Updated 9 months ago
- DragGAN meets GET3D for interactive mesh generation and editing ☆465 Updated 2 years ago
- [NeurIPS 2023] Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation ☆473 Updated last year
- [arXiv 2023] DreamGaussian4D: Generative 4D Gaussian Splatting ☆597 Updated last year
- [ICLR 2024] Official implementation of Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video ☆276 Updated last year
- 🧙🏻‍♂️ A curated list of papers to dive into radiance field-based 3D editing ☆488 Updated last month
- [SIGGRAPH 2024] Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning ☆194 Updated last year
- A collection of papers on neural field-based inverse rendering ☆252 Updated last year