colorful-liyu / awesome-3D-editing
☆21 · Updated 2 years ago
Alternatives and similar repositories for awesome-3D-editing
Users interested in awesome-3D-editing are comparing it to the libraries listed below.
- Official code for DreamEditor: Text-Driven 3D Scene Editing with Neural Fields (SIGGRAPH Asia 2023) ☆124 · Updated last year
- [AAAI'24] Official PyTorch implementation of FocalDreamer: Text-Driven 3D Editing via Focal-Fusion Assembly ☆34 · Updated last year
- Naive filter of Objaverse ☆148 · Updated last year
- CAD: Photorealistic 3D Generation via Adversarial Distillation (CVPR 2024) ☆130 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "Text-to-3D with Classifier Score Distillation" ☆135 · Updated last year
- Geometry-aware Novel View Synthesis with Pre-trained 2D Prior ☆39 · Updated 2 years ago
- [ECCV 2024] HiFi-123: Towards High-fidelity One Image to 3D Content Generation ☆67 · Updated last year
- [NeurIPS'23] An efficient PyTorch-based library for training 3D-aware image synthesis models ☆95 · Updated 2 years ago
- Official PyTorch implementation of "A Unified Approach for Text- and Image-guided 4D Scene Generation" (CVPR 2024) ☆92 · Updated last year
- [ECCV 2024] Viewpoint Textual Inversion: Discovering Scene Representations and 3D View Control in 2D Diffusion Models ☆110 · Updated 11 months ago
- [CVPR 2024] "Taming Mode Collapse in Score Distillation for Text-to-3D Generation" by Peihao Wang, Dejia Xu, Zhiwen Fan, Dilin Wang, Srey… ☆50 · Updated last year
- This repo contains the Python code and webpage HTML files for the Vox-E project from VAILab at TAU ☆77 · Updated 7 months ago
- [ACM MM 2024] The official repo for "DreamLCM: Towards High-Quality Text-to-3D Generation via Latent Consistency Model" ☆16 · Updated last year
- Code release for the paper "DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation" ☆132 · Updated 7 months ago
- ☆51 · Updated 7 months ago
- A PyTorch implementation of "X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance" ☆29 · Updated last year
- VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model ☆173 · Updated last year
- Official PyTorch & Diffusers implementation of "Text-Guided Texturing by Synchronized Multi-View Diffusion" ☆174 · Updated 8 months ago
- Official code repository for the paper "TAPS3D: Text-Guided 3D Textured Shape Generation from Pseudo Supervision" ☆44 · Updated 2 years ago
- Official code release for the paper "Sketch-guided Text-based 3D Editing" ☆53 · Updated 2 years ago
- Official code release for DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields (ICCV 2023) ☆55 · Updated last year
- [CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models ☆173 · Updated last year
- Unofficial implementation of 2D ProlificDreamer ☆144 · Updated 10 months ago
- Noise-Free Score Distillation ☆72 · Updated 2 years ago
- A unified Diffusers implementation of MVDream and ImageDream ☆112 · Updated last year
- [ICLR 2024] Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping ☆82 · Updated last year
- ☆45 · Updated last year
- Implementation of "Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data" (ICCV 2023) ☆108 · Updated last year
- [CVPR 2024] MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation ☆127 · Updated last year
- Official code for the ECCV 2024 paper "Learn to Optimize Denoising Scores: A Unified and Improved Diffusion Prior for 3D Generation" ☆72 · Updated last year