vivoCameraResearch / Hyper-Motion
HyperMotion is a pose-guided human image animation framework based on a large-scale video diffusion Transformer.
☆128 · Updated 5 months ago
Alternatives and similar repositories for Hyper-Motion
Users interested in Hyper-Motion are comparing it to the repositories listed below.
- AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models ☆131 · Updated 5 months ago
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆128 · Updated 6 months ago
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated last year
- Code of RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images ☆93 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆65 · Updated 7 months ago
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2… ☆78 · Updated 4 months ago
- [AAAI 2026] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation ☆62 · Updated 4 months ago
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆162 · Updated last month
- Project page for "Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation" ☆50 · Updated 8 months ago
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆100 · Updated last year
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆40 · Updated 10 months ago
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated 11 months ago
- [ICLR 2025] X-NeMo & Project X-Portrait2 ☆96 · Updated 4 months ago
- FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation ☆77 · Updated 4 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆56 · Updated last year
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆27 · Updated 3 weeks ago
- Unified Video Editing with Temporal Reasoner ☆105 · Updated last week
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆97 · Updated last year
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆64 · Updated 6 months ago
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆100 · Updated last month
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆99 · Updated last year
- [SIGGRAPH 2025] Official implementation of 'Motion Inversion For Video Customization' ☆153 · Updated last year
- The official implementation of "RepVideo: Rethinking Cross-Layer Representation for Video Generation" ☆123 · Updated 11 months ago
- Official implementation of ICCV 2025 paper - CharaConsist: Fine-Grained Consistent Character Generation ☆139 · Updated 5 months ago
- [ICCV 2025] Edicho: Consistent Image Editing in the Wild ☆123 · Updated 2 months ago