vivoCameraResearch / Hyper-Motion
HyperMotion is a pose-guided human image animation framework built on a large-scale video diffusion Transformer.
☆131 · Updated 6 months ago
Alternatives and similar repositories for Hyper-Motion
Users interested in Hyper-Motion are comparing it to the repositories listed below.
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆128 · Updated 6 months ago
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆100 · Updated last year
- [ICLR 2025] X-NeMo & Project X-Portrait2 ☆110 · Updated 5 months ago
- Code of RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images ☆93 · Updated last year
- AniCrafter: Customizing Realistic Human-Centric Animation via Avatar-Background Conditioning in Video Diffusion Models ☆133 · Updated 2 weeks ago
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆29 · Updated last month
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆165 · Updated 2 months ago
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆65 · Updated 8 months ago
- Project for Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation ☆49 · Updated 9 months ago
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆40 · Updated 11 months ago
- ☆66 · Updated last year
- [AAAI 2026] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation ☆63 · Updated 5 months ago
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2… ☆79 · Updated 5 months ago
- FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation ☆78 · Updated 5 months ago
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated last year
- ☆91 · Updated last year
- ☆52 · Updated 2 weeks ago
- Blending Custom Photos with Video Diffusion Transformers ☆48 · Updated last year
- [ICCV 2025] Edicho: Consistent Image Editing in the Wild ☆124 · Updated 2 months ago
- [ECCV 2024] IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation ☆56 · Updated last year
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆99 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆65 · Updated 7 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆98 · Updated last year
- [NeurIPS 2025] VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping ☆67 · Updated 3 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation ☆85 · Updated last year
- [SIGGRAPH Asia '25] BlobCtrl: Taming Controllable Blob for Element-level Image Editing ☆26 · Updated 2 months ago
- [ICCV 2025] Official implementation of CharaConsist: Fine-Grained Consistent Character Generation ☆141 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Official implementation of "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆138 · Updated last year
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆101 · Updated 2 months ago