IGL-HKUST / DiffusionAsShader
[SIGGRAPH 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
☆715 · Updated last month
Alternatives and similar repositories for DiffusionAsShader
Users interested in DiffusionAsShader are comparing it with the repositories listed below.
- [ICLR'25] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints ☆602 · Updated 2 months ago
- [ICCV 2025, Oral] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ☆716 · Updated last week
- Official code for "AutoVFX: Physically Realistic Video Editing from Natural Language Instructions" ☆320 · Updated 3 months ago
- [CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control ☆824 · Updated last month
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation" ☆387 · Updated 2 weeks ago
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation ☆351 · Updated this week
- Pippo: High-Resolution Multi-View Humans from a Single Image ☆583 · Updated 4 months ago
- Code release for https://kovenyu.com/WonderWorld/ ☆611 · Updated 3 months ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆450 · Updated last month
- PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024) ☆306 · Updated 9 months ago
- Simple ControlNet module for the CogVideoX model ☆166 · Updated 6 months ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation" ☆486 · Updated 6 months ago
- [ICLR 2025] Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion ☆275 · Updated 5 months ago
- [MM24] Official codes and datasets for ACM MM24 paper "Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models"… ☆277 · Updated 10 months ago
- Code for RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion [3DV 2025] ☆284 · Updated 4 months ago
- [CVPR 2025] AnimateAnything ☆183 · Updated 2 months ago
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆316 · Updated 2 months ago
- [CVPR 2025 Highlight] Material Anything: Generating Materials for Any 3D Object via Diffusion ☆302 · Updated last month
- [CVPR 2025] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images ☆354 · Updated 4 months ago
- [CVPR 2025] MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D ☆218 · Updated last week
- [ICCV 2025] Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models ☆293 · Updated last week
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆326 · Updated 2 months ago
- [ICCV'25] DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion ☆1,277 · Updated 7 months ago
- [NeurIPS 2024] Official code for "Neural Gaffer: Relighting Any Object via Diffusion" ☆311 · Updated last month
- [CVPR 2025 Highlight] Official implementation of "Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters" ☆316 · Updated 2 months ago
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆491 · Updated last year
- HoloPart: Generative 3D Part Amodal Segmentation ☆545 · Updated 3 months ago
- [ICLR'25] Official Implementation for Consistent Flow Distillation for Text-to-3D Generation ☆187 · Updated 6 months ago