[ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation
☆62 · Apr 23, 2025 · Updated 10 months ago
Alternatives and similar repositories for Dyadic-Interaction-Modeling
Users interested in Dyadic-Interaction-Modeling are comparing it to the repositories listed below.
- ☆105 · Jul 5, 2023 · Updated 2 years ago
- Implementation for the paper "Can Language Models Learn to Listen?" ☆70 · Sep 2, 2023 · Updated 2 years ago
- AgentAvatar: Disentangling Planning, Driving and Rendering for Photorealistic Avatar Agents ☆11 · Dec 4, 2023 · Updated 2 years ago
- The official implementation of the paper "Affective Faces for Goal-Driven Dyadic Communication." ☆15 · Jan 27, 2023 · Updated 3 years ago
- Official PyTorch implementation of Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022) ☆126 · Aug 18, 2024 · Updated last year
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆237 · Mar 17, 2024 · Updated last year
- [BMVC'24] G3FA: Geometry-guided GAN for Face Animation ☆20 · Mar 14, 2025 · Updated 11 months ago
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡. ☆119 · Jun 12, 2025 · Updated 8 months ago
- ☆20 · Sep 11, 2024 · Updated last year
- [NeurIPS 2024] The official code of MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models ☆75 · Jan 9, 2026 · Updated last month
- [CVPR 2025] Official code for "Synergizing Motion and Appearance: Multi-Scale Compensatory Codebooks for Talking Head Video Generation" ☆65 · Jun 6, 2025 · Updated 8 months ago
- Official implementation of the ICCV 2023 Oral paper "Role-Aware Interaction Generation from Textual Description" ☆33 · Oct 20, 2023 · Updated 2 years ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆138 · Jan 28, 2026 · Updated last month
- 🎓 Update Talking-Face Research Papers Daily ☆403 · Updated this week
- A novel approach for personalized speech-driven 3D facial animation ☆57 · Apr 26, 2024 · Updated last year
- ☆47 · Sep 8, 2025 · Updated 5 months ago
- ☆21 · Apr 17, 2024 · Updated last year
- [ECCV 2024 Oral] EDTalk - Official PyTorch Implementation ☆456 · Sep 29, 2025 · Updated 5 months ago
- ☆232 · Sep 5, 2024 · Updated last year
- ICASSP 2024: Adaptive Super Resolution For One-Shot Talking-Head Generation ☆180 · Mar 26, 2024 · Updated last year
- GDPnet: "Geometry-guided Dense Perspective Network for Speech-Driven Facial Animation." (TVCG 2021) ☆11 · Nov 21, 2021 · Updated 4 years ago
- Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars ☆395 · Apr 8, 2025 · Updated 10 months ago
- Foundation Models and Data for Human-Human and Human-AI interactions. ☆354 · Dec 13, 2025 · Updated 2 months ago
- Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness (ICASSP 202… ☆74 · Feb 20, 2024 · Updated 2 years ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆194 · Apr 30, 2024 · Updated last year
- ECCV 2024: Controllable Motion Generation through Language Guided Pose Code Editing ☆50 · Dec 20, 2024 · Updated last year
- [CVPR 2025] MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities ☆32 · Apr 6, 2025 · Updated 10 months ago
- Official code for the ICCV 2023 paper "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" ☆300 · May 30, 2025 · Updated 9 months ago
- [CVPR 2023] DPE: Disentanglement of Pose and Expression for General Video Portrait Editing ☆453 · Feb 27, 2024 · Updated 2 years ago
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model ☆82 · Oct 30, 2024 · Updated last year
- [ICLR 2025] PyTorch implementation of "Aligning Motion Generation with Human Perceptions" ☆89 · Apr 27, 2025 · Updated 10 months ago
- [ECCV 2024] Official implementation of KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding ☆34 · Jul 12, 2024 · Updated last year
- Official implementation of the CVPR 2024 paper "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appear… ☆124 · Oct 28, 2025 · Updated 4 months ago
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆411 · Feb 23, 2024 · Updated 2 years ago
- [ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis ☆714 · Nov 12, 2025 · Updated 3 months ago
- The official SpeakerVid-5M data curation code. ☆68 · Jul 23, 2025 · Updated 7 months ago
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆35 · Feb 1, 2026 · Updated last month
- ☆84 · Sep 1, 2024 · Updated last year
- MM 2022 Workshop: Perceptual Conversational Head Generation with Regularized Driver and Enhanced Renderer ☆55 · May 16, 2024 · Updated last year