Sindhu-Hegde / you_said_that
☆8 · Updated last year
Related projects
Alternatives and complementary repositories for you_said_that
- Facial Expression Feature Extractor ☆67 · Updated 2 years ago
- You Said That?: Synthesising Talking Faces from Audio ☆69 · Updated 6 years ago
- Official PyTorch implementation of "Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion" (CVPR 2022) ☆108 · Updated 3 months ago
- ☆35 · Updated 6 years ago
- PATS Dataset: Aligned Pose-Audio-Transcripts and Style for co-speech gesture research ☆53 · Updated last year
- Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices? (CVPR 2022) ☆126 · Updated last year
- Official repository for the paper "What comprises a good talking-head video generation?: A Survey and Benchmark" ☆90 · Updated last year
- ☆98 · Updated 8 months ago
- ☆86 · Updated last year
- ☆29 · Updated 5 months ago
- Image Animation with Perturbed Masks ☆12 · Updated 2 years ago
- ☆72 · Updated last year
- Network architectures of NeuralVoicePuppetry ☆78 · Updated 3 years ago
- Implementation of "JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting" ☆50 · Updated 2 years ago
- Code for "Audio-Driven Co-Speech Gesture Video Generation" (NeurIPS 2022, Spotlight) ☆84 · Updated last year
- VariTex: Variational Neural Face Textures (ICCV 2021) ☆68 · Updated 8 months ago
- ☆93 · Updated 2 years ago
- An improved version of APB2Face: Real-Time Audio-Guided Multi-Face Reenactment ☆82 · Updated 3 years ago
- Code for "Predicting Personalized Head Movement from Short Video and Speech Signal" (TMM) ☆16 · Updated last year
- Official repository for the paper "Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach", published … ☆29 · Updated 4 months ago
- [ICCV 2021] Official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates" ☆88 · Updated last year
- PyTorch implementation of the paper "Few-Shot Human Motion Transfer by Personalized Geometry and Texture Modeling" ☆71 · Updated last year
- Official inference code for PD-FGC ☆82 · Updated last year
- Official PyTorch implementation of "APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals" (ICASSP'20) ☆63 · Updated 3 years ago
- ☆104 · Updated 2 years ago
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation (ECCV 2020) ☆246 · Updated 4 months ago
- Repository for the paper "Unsupervised Volumetric Animation" ☆69 · Updated last year
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation ☆80 · Updated 4 years ago
- 3D reconstructions of the MEAD dataset ☆13 · Updated last year
- Official implementation of the IVA '20 Best Paper Award paper "Let's Face It: Probabilistic Multi-modal Interlocutor-aware Gener…" ☆16 · Updated last year