sasanasadiabadi / speech_animation
☆24 · Updated 6 years ago
Related projects
Alternatives and complementary repositories for speech_animation
- A Deep Learning Approach for Generalized Speech Animation ☆32 · Updated 6 years ago
- USC CS621 Course Project ☆26 · Updated last year
- Code for the paper "End-to-end Learning for 3D Facial Animation from Speech" ☆70 · Updated 2 years ago
- ECE 535 Course Project, Deep Learning Framework ☆75 · Updated 6 years ago
- ☆80 · Updated 6 years ago
- ObamaNet fork ☆12 · Updated 5 years ago
- ☆35 · Updated 6 years ago
- Generating Talking Face Landmarks from Speech ☆156 · Updated last year
- ☆48 · Updated last year
- Crystal TTVS engine is a real-time audio-visual multilingual speech synthesizer with a 3D expressive avatar. ☆84 · Updated 4 years ago
- Official GitHub repo for the paper "What comprises a good talking-head video generation?: A Survey and Benchmark" ☆90 · Updated last year
- You Said That?: Synthesising Talking Faces from Audio ☆69 · Updated 6 years ago
- Speech-conditioned face generation using Generative Adversarial Networks (ICASSP 2019) ☆56 · Updated 2 years ago
- ML-driven tongue animation (CVPR'22) ☆41 · Updated 2 years ago
- Official PyTorch implementation of "APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals", ICASSP'20 ☆63 · Updated 3 years ago
- Speech-conditioned face generation using Generative Adversarial Networks ☆87 · Updated last year
- Talking Face Generation by Conditional Recurrent Adversarial Network ☆61 · Updated 4 years ago
- Unsupervised Any-to-many Audiovisual Synthesis via Exemplar Autoencoders ☆120 · Updated 2 years ago
- Implementation based on "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" ☆160 · Updated 4 years ago
- Sequential Learning for Dance generation ☆21 · Updated 3 years ago
- ☆191 · Updated 3 years ago
- My experiments in lip reading using deep learning with the LRW dataset ☆51 · Updated 3 years ago
- An improved version of APB2Face: Real-Time Audio-Guided Multi-Face Reenactment ☆82 · Updated 3 years ago
- Learning Lip Sync of Obama from Speech Audio ☆67 · Updated 4 years ago
- Official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial A… ☆44 · Updated last year
- Official implementation of the ICMI 2020 Best Paper Award winner "Gesticulator: A framework for semantically-aware speech-driven gesture gener… ☆122 · Updated last year
- 2.5D visual sound dataset ☆92 · Updated 3 years ago
- Official implementation of the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation" ☆107 · Updated last year
- PyTorch implementation of Dance Dance Generation: Motion Transfer for Internet Videos ☆43 · Updated 5 years ago
- Code for sound synthesis ☆50 · Updated 6 years ago