IGL-HKUST / DiffusionAsShader
[SIGGRAPH 2025] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control
☆795 · Updated 6 months ago
Alternatives and similar repositories for DiffusionAsShader
Users interested in DiffusionAsShader are comparing it to the libraries listed below.
- [ICLR'25] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints ☆666 · Updated 7 months ago
- ☆625 · Updated last year
- [ICCV 2025, Oral] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ☆808 · Updated last week
- [SIGGRAPH Asia 2025] Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation ☆472 · Updated 3 months ago
- Official code for "AutoVFX: Physically Realistic Video Editing from Natural Language Instructions." ☆328 · Updated 8 months ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆491 · Updated 2 months ago
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation". ☆457 · Updated 4 months ago
- Code release for https://kovenyu.com/WonderWorld/ ☆695 · Updated 8 months ago
- Simple ControlNet module for the CogVideoX model. ☆175 · Updated 11 months ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation". ☆499 · Updated 11 months ago
- Pippo: High-Resolution Multi-View Humans from a Single Image ☆625 · Updated 8 months ago
- [SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios" ☆345 · Updated 2 months ago
- [ICLR 2025] Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion ☆283 · Updated last month
- PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024) ☆326 · Updated last year
- [CVPR 2025] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images ☆367 · Updated 8 months ago
- [ICCV 2025] Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models ☆548 · Updated this week
- [NeurIPS D&B Track 2024] Official implementation of HumanVid ☆344 · Updated 2 months ago
- ☆303 · Updated last year
- MotionStream: Real-Time Video Generation with Interactive Motion Controls ☆443 · Updated last month
- Generate large-scale explorable 3D scenes with high-quality panorama videos from a single image or text prompt. ☆614 · Updated last month
- [CVPR 2025] AnimateAnything ☆186 · Updated 6 months ago
- [CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control ☆1,211 · Updated 3 months ago
- [T-PAMI 2025] V3D: Video Diffusion Models are Effective 3D Generators ☆512 · Updated last year
- HY-World 1.5: A Systematic Framework for Interactive World Modeling with Real-Time Latency and Geometric Consistency ☆752 · Updated this week
- [CVPR 2025 Highlight] Official implementation of "Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters" ☆344 · Updated 7 months ago
- Code for "FlashWorld: High-quality 3D Scene Generation within Seconds" ☆630 · Updated 3 weeks ago
- [NeurIPS 2024] Official code for "Neural Gaffer: Relighting Any Object via Diffusion" ☆331 · Updated 6 months ago
- [ICCV'25] DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion ☆1,321 · Updated 2 months ago
- [CVPR 2025 Highlight] Material Anything: Generating Materials for Any 3D Object via Diffusion ☆332 · Updated this week
- ViewDiff generates high-quality, multi-view consistent images of a real-world 3D object in authentic surroundings. (CVPR 2024) ☆377 · Updated 9 months ago