showlab / Multi-human-Talking-Video-Dataset
Official repository for the Multi-human Interactive Talking Dataset
☆44 · Updated 3 weeks ago
Alternatives and similar repositories for Multi-human-Talking-Video-Dataset
Users interested in Multi-human-Talking-Video-Dataset are comparing it to the repositories listed below.
- The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation ☆34 · Updated 4 months ago
- [arXiv'24] Holistic-Motion2D: Scalable Whole-body Human Motion Generation in 2D Space ☆45 · Updated 10 months ago
- RealisMotion: Decomposed Human Motion Control and Video Generation in the World Space ☆30 · Updated 3 weeks ago
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆21 · Updated last month
- Repo for "Human-Centric Foundation Models: Perception, Generation and Agentic Modeling" (https://arxiv.org/abs/2502.08556) ☆52 · Updated 6 months ago
- Official implementation of DragVideo ☆52 · Updated 11 months ago
- [CVPR 2024] DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance — official PyTorch implementation ☆110 · Updated last year
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆59 · Updated 4 months ago
- [CVPR 2025] A Hierarchical Movie Level Dataset for Long Video Generation ☆68 · Updated 5 months ago
- HyperMotion: a pose-guided human image animation framework built on a large-scale video diffusion Transformer ☆113 · Updated last month
- [CVPR 2025] Diffusion Powers Video Tokenizer for Comprehension and Generation ☆74 · Updated 6 months ago
- Video-GPT via Next Clip Diffusion ☆39 · Updated 3 months ago
- OpenTMA: text-motion alignment support for HumanML3D, Motion-X, and UniMoCap ☆43 · Updated last year
- ☆85 · Updated last year
- [CVPR 2024] Towards Variable and Coordinated Holistic Co-Speech Motion Generation ☆57 · Updated last year
- Awesome Controllable Video Generation with Diffusion Models ☆55 · Updated last month
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆20 · Updated 5 months ago
- [ACM MM 2024] MMHead: Towards Fine-grained Multi-modal 3D Facial Animation ☆29 · Updated 4 months ago
- Benchmark dataset and code of MSRVTT-Personalization ☆46 · Updated 2 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆23 · Updated 4 months ago
- DreamCinema: Cinematic Transfer with Free Camera and 3D Character ☆96 · Updated 2 months ago
- ☆65 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Official implementation of "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆137 · Updated 10 months ago
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆71 · Updated last month
- [AAAI 2024] Controllable 3D Face Generation with Conditional Style Code Diffusion ☆38 · Updated last year
- PyTorch implementation of DiffMoE, TC-DiT, EC-DiT and Dense DiT ☆127 · Updated 4 months ago
- DanceTogether! Identity-Preserving Multi-Person Interactive Video Generation ☆34 · Updated last month
- ☆59 · Updated last month
- Code for the paper "Joint Co-Speech Gesture and Expressive Talking Face Generation using Diffusion with Adapters" ☆22 · Updated 7 months ago
- [AAAI 2023 Summer Symposium, Best Paper Award] Taming Diffusion Models for Music-driven Conducting Motion Generation ☆26 · Updated last year