aim-uofa / GenPercept
[ICLR2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models
☆184 · Updated 3 months ago
Alternatives and similar repositories for GenPercept:
Users interested in GenPercept are comparing it to the repositories listed below.
- Official repository for the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" ☆126 · Updated last month
- Source code for the paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆291 · Updated last month
- [NeurIPS 2024] DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos ☆225 · Updated 7 months ago
- Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency ☆163 · Updated last month
- [NeurIPS 2024] MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing ☆113 · Updated 6 months ago
- Official repository for "SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE" ☆143 · Updated last month
- Seeing World Dynamics in a Nutshell ☆106 · Updated last month
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆248 · Updated 6 months ago
- ☆60 · Updated 3 months ago
- [NeurIPS 2024] Official code for "Splatter a Video: Video Gaussian Representation for Versatile Processing" ☆136 · Updated 3 months ago
- ChronoDepth: Learning Temporally Consistent Video Depth from Video Diffusion Priors