showlab / Multi-human-Talking-Video-Dataset
Multi-human Interactive Talking Dataset
☆57 · Updated 3 months ago
Alternatives and similar repositories for Multi-human-Talking-Video-Dataset
Users interested in Multi-human-Talking-Video-Dataset are comparing it to the repositories listed below.
- The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation ☆37 · Updated 6 months ago
- [WACV 2025] Official code of "UniVid: Unifying Vision Tasks with Pre-trained Video Generation Models" ☆34 · Updated 2 weeks ago
- HyperMotion: a pose-guided human image animation framework based on a large-scale video diffusion Transformer ☆125 · Updated 4 months ago
- [arXiv'24] Holistic-Motion2D: Scalable Whole-body Human Motion Generation in 2D Space ☆47 · Updated last year
- [CVPR 2025] A Hierarchical Movie-Level Dataset for Long Video Generation ☆75 · Updated 8 months ago
- The official UniVerse-1 code ☆106 · Updated last month
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆22 · Updated 8 months ago
- Benchmark dataset and code for MSRVTT-Personalization ☆51 · Updated 2 weeks ago
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆25 · Updated last month
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆62 · Updated 7 months ago
- RealisMotion: Decomposed Human Motion Control and Video Generation in the World Space ☆34 · Updated last month
- DanceTogether! Identity-Preserving Multi-Person Interactive Video Generation ☆36 · Updated 3 months ago
- [CVPR 2024] DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance (official PyTorch implementation) ☆111 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆138 · Updated last year
- ☆61 · Updated 4 months ago
- [ACM MM 2024] MMHead: Towards Fine-grained Multi-modal 3D Facial Animation ☆33 · Updated last month
- Official implementation of DragVideo ☆55 · Updated last year
- ☆90 · Updated last year
- [CVPR 2024] Towards Variable and Coordinated Holistic Co-Speech Motion Generation ☆58 · Updated last year
- [ICCV 2025] SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis ☆35 · Updated last week
- Awesome Controllable Video Generation with Diffusion Models ☆58 · Updated 4 months ago
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆127 · Updated 5 months ago
- Official PyTorch implementation of "MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts" ☆34 · Updated last year
- [AAAI 2023 Summer Symposium, Best Paper Award] Taming Diffusion Models for Music-driven Conducting Motion Generation ☆26 · Updated last year
- [CVPR 2024] MotionEditor: the first diffusion-based model capable of video motion editing ☆183 · Updated 2 months ago
- Video-GPT via Next Clip Diffusion ☆43 · Updated 5 months ago
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆94 · Updated last week
- [AAAI 2024] Controllable 3D Face Generation with Conditional Style Code Diffusion ☆38 · Updated last year
- DreamCinema: Cinematic Transfer with Free Camera and 3D Character ☆96 · Updated 5 months ago
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆64 · Updated 5 months ago