IGL-HKUST / DiffusionAsShader
[SIGGRAPH 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
☆804 · Updated 8 months ago
Alternatives and similar repositories for DiffusionAsShader
Users interested in DiffusionAsShader are comparing it to the repositories listed below.
- [ICLR'25] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints ☆679 · Updated 8 months ago
- ☆636 · Updated last year
- [ICCV 2025, Oral] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ☆832 · Updated last month
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation [SIGGRAPH Asia 2025] ☆494 · Updated 4 months ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆501 · Updated 3 months ago
- [ICLR 2025] Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion ☆289 · Updated 3 months ago
- Official code for "AutoVFX: Physically Realistic Video Editing from Natural Language Instructions" ☆329 · Updated 10 months ago
- Code release for https://kovenyu.com/WonderWorld/ ☆708 · Updated 9 months ago
- Pippo: High-Resolution Multi-View Humans from a Single Image ☆630 · Updated 10 months ago
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation" ☆470 · Updated 5 months ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation" ☆502 · Updated last year
- Simple ControlNet module for the CogVideoX model ☆178 · Updated last year
- PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024) ☆335 · Updated last year
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆344 · Updated 3 months ago
- [ICCV'25] DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion ☆1,327 · Updated 3 months ago
- [CVPR 2025] MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D ☆245 · Updated 5 months ago
- [ICCV 2025] Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models ☆564 · Updated last week
- [CVPR 2025] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images ☆369 · Updated 10 months ago
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆344 · Updated 3 months ago
- Official implementation of ATI: Any Trajectory Instruction for Controllable Video Generation. https://arxiv.org/pdf/2505.22944 ☆336 · Updated 6 months ago
- [NeurIPS 2024] Official code for "Neural Gaffer: Relighting Any Object via Diffusion" ☆337 · Updated 8 months ago
- [CVPR 2025 Highlight] Material Anything: Generating Materials for Any 3D Object via Diffusion ☆337 · Updated last month
- Code for RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion [3DV 2025] ☆295 · Updated 10 months ago
- The official code of Yume ☆607 · Updated 3 weeks ago
- [CVPR 2025] AnimateAnything ☆186 · Updated 8 months ago
- ☆303 · Updated last year
- Code for "FlashWorld: High-quality 3D Scene Generation within Seconds" (ICLR 2026) ☆666 · Updated last week
- MotionStream: Real-Time Video Generation with Interactive Motion Controls ☆497 · Updated this week
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆514 · Updated last year
- [MM24] Official code and datasets for the ACM MM24 paper "Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models"… ☆282 · Updated last year