IGL-HKUST / DiffusionAsShader
[SIGGRAPH 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
☆760 · Updated 4 months ago
Alternatives and similar repositories for DiffusionAsShader
Users interested in DiffusionAsShader are comparing it to the libraries listed below
- [ICCV 2025, Oral] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ☆776 · Updated 2 months ago
- [ICLR'25] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints ☆631 · Updated 5 months ago
- ☆597 · Updated last year
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation [SIGGRAPH Asia 2025] ☆425 · Updated last month
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆478 · Updated this week
- Official code for "AutoVFX: Physically Realistic Video Editing from Natural Language Instructions." ☆325 · Updated 6 months ago
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation". ☆427 · Updated 2 months ago
- Pippo: High-Resolution Multi-View Humans from a Single Image ☆609 · Updated 6 months ago
- Code release for https://kovenyu.com/WonderWorld/ ☆658 · Updated 6 months ago
- PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024) ☆313 · Updated last year
- [ICCV 2025] Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models ☆520 · Updated last month
- [ICLR 2025] Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion ☆281 · Updated 7 months ago
- Simple ControlNet module for the CogVideoX model. ☆171 · Updated 9 months ago
- Generate large-scale explorable 3D scenes with high-quality panorama videos from a single image or text prompt. ☆543 · Updated last month
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆338 · Updated 2 months ago
- [CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control ☆1,133 · Updated last month
- [CVPR 2025] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images ☆361 · Updated 6 months ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation". ☆495 · Updated 9 months ago
- [CVPR 2025 Highlight] Material Anything: Generating Materials for Any 3D Object via Diffusion ☆316 · Updated 2 months ago
- [MM24] Official code and datasets for the ACM MM24 paper "Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models"… ☆276 · Updated last year
- [CVPR 2025] MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D ☆235 · Updated 2 months ago
- [CVPR 2025] MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation ☆823 · Updated 4 months ago
- ☆297 · Updated last year
- [NeurIPS 2024] Official code for "Neural Gaffer: Relighting Any Object via Diffusion" ☆328 · Updated 4 months ago
- [ICCV'25] DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion ☆1,299 · Updated last week
- [CVPR 2025] AnimateAnything ☆186 · Updated 4 months ago
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆503 · Updated last year
- Code for RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion [3DV 2025] ☆291 · Updated 7 months ago
- [ICCV 2025] FaceLift: Learning Generalizable Single Image 3D Face Reconstruction from Synthetic Heads ☆430 · Updated last week
- [CVPR 2025 Highlight] Official implementation of "Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters" ☆330 · Updated 5 months ago