VisualComputingInstitute / diffusion-e2e-ft
[WACV'25 Oral] Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think
☆450 · Updated 7 months ago
Alternatives and similar repositories for diffusion-e2e-ft
Users interested in diffusion-e2e-ft are comparing it to the libraries listed below.
- [ICLR2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models ☆197 · Updated 5 months ago
- Official implementation of Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction ☆698 · Updated 3 months ago
- [CVPR2024 Oral] EscherNet: A Generative Model for Scalable View Synthesis ☆341 · Updated 10 months ago
- ☆283 · Updated 9 months ago
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆259 · Updated 8 months ago
- [MM24] Official codes and datasets for ACM MM24 paper "Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models"… ☆276 · Updated 10 months ago
- [ECCV 2024] Improving 2D Feature Representations by 3D-Aware Fine-Tuning ☆287 · Updated 4 months ago
- ChronoDepth: Learning Temporally Consistent Video Depth from Video Diffusion Priors ☆257 · Updated 4 months ago
- Code for "Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text" (NeurIPS 2024) ☆351 · Updated 4 months ago
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆299 · Updated 3 months ago
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆488 · Updated last year
- High-quality and editable surfel 3D Gaussian generation through native 3D diffusion (ICLR 2025) ☆347 · Updated last month
- ViewDiff generates high-quality, multi-view consistent images of a real-world 3D object in authentic surroundings. (CVPR2024) ☆370 · Updated 3 months ago
- "Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models", Hanwen Liang*, Yuyang Yin*, Dejia Xu, Hanxue Li… ☆305 · Updated 5 months ago
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆242 · Updated 7 months ago
- Depth Any Video with Scalable Synthetic Data (ICLR 2025) ☆492 · Updated 7 months ago
- Orient Anything, ICML 2025 ☆292 · Updated 2 months ago
- [arXiv 2023] DreamGaussian4D: Generative 4D Gaussian Splatting ☆570 · Updated last year
- [ECCV'24] GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image ☆891 · Updated 7 months ago
- [NeurIPS 2024] HDR 3D Scene Editing! ☆229 · Updated 6 months ago
- [CVPR 2025 Highlight] VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step ☆293 · Updated last week
- Official implementation of L-MAGIC ☆130 · Updated 11 months ago
- [CVPR 2025] Code for Segment Any Motion in Videos ☆391 · Updated last month
- 🍳 [CVPR'24 Highlight] Pytorch implementation of "Taming Stable Diffusion for Text to 360° Panorama Image Generation" ☆224 · Updated last year
- [TPAMI 2025, NeurIPS 2024] Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels ☆363 · Updated 6 months ago
- [ICCV 2025] VistaDream: Sampling multiview consistent images for single-view scene reconstruction ☆451 · Updated 2 weeks ago
- Official code for the paper: Depth Anything At Any Condition ☆246 · Updated last week
- [ICCV 2025] GeometryCrafter: Consistent Geometry Estimation for Open-world Videos with Diffusion Priors ☆348 · Updated 3 weeks ago
- [CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control ☆810 · Updated 3 weeks ago
- MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, NeurIPS 2023 (spotlight) ☆544 · Updated last year