ARIA-VALUSPA / AVP
This is the ARIA-VALUSPA Platform (AVP for short). Use this platform to build your own Virtual Humans with audio-visual input and output, language models for English, French, and German, emotional understanding, and much more. This work was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645378.
☆32 · Updated 5 years ago
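Concretely, a platform like this orchestrates a perceive → estimate-emotion → respond loop over the audio-visual input streams. Below is a minimal, self-contained Python sketch of that loop. Every name in it (`Percept`, `estimate_emotion`, `choose_response`) is a hypothetical illustration, not AVP's actual API, and the emotion estimator is a stub where a real platform would run its trained audio-visual models.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """One time window of audio-visual input from the user."""
    audio_features: list[float]        # e.g. prosodic features from the microphone
    facial_action_units: list[float]   # e.g. AU intensities from the camera

def estimate_emotion(percept: Percept) -> tuple[float, float]:
    """Map input features to a (valence, arousal) estimate.

    A real platform would run trained audio and video models here;
    this stub just averages the features to keep the sketch runnable.
    """
    valence = sum(percept.facial_action_units) / max(len(percept.facial_action_units), 1)
    arousal = sum(percept.audio_features) / max(len(percept.audio_features), 1)
    return valence, arousal

def choose_response(valence: float, arousal: float, utterance: str) -> str:
    """Pick a reply conditioned on the user's estimated emotional state."""
    if valence < 0 and arousal > 0.3:
        return "I can see this is frustrating. Let's slow down."
    return f"You said {utterance!r}. Tell me more."

# One turn of the loop: perceive, estimate emotion, respond.
percept = Percept(audio_features=[0.2, 0.6], facial_action_units=[-0.3, -0.1])
valence, arousal = estimate_emotion(percept)
print(choose_response(valence, arousal, "the demo keeps crashing"))
```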
Alternatives and similar repositories for AVP:
Users interested in AVP are comparing it to the libraries listed below.
- A machine-learning classification model capable of recognizing human facial emotions ☆23 · Updated 7 years ago
- USC CS621 Course Project ☆26 · Updated 2 years ago
- Tool for online Valence and Arousal annotation. ☆35 · Updated 4 years ago
- Build your own Real-time Speech Emotion Recognizer ☆112 · Updated 6 years ago
- Facial Action Unit Pretraining ☆29 · Updated 5 years ago
- Mova: Movement Analytics Platform ☆21 · Updated 8 years ago
- A "talking head" project capable of displaying emotions created using blender and python☆19Updated 6 years ago
- processing and extracting of face and mouth image files out of the TCDTIMIT database☆45Updated 4 years ago
- An avatar simulation for AirSim (https://github.com/Microsoft/AirSim). ☆78 · Updated 2 years ago
- An end-to-end MATLAB toolkit for completely unsupervised Speaker Diarization using state-of-the-art algorithms. ☆16 · Updated 9 years ago
- This is the official implementation for the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation". ☆108 · Updated last year
- ECE 535 - Course Project, Deep Learning Framework ☆75 · Updated 6 years ago
- You Said That?: Synthesising Talking Faces from Audio ☆69 · Updated 6 years ago
- Code for the paper "End-to-end Learning for 3D Facial Animation from Speech" ☆71 · Updated 2 years ago
- Learning Lip Sync of Obama from Speech Audio ☆67 · Updated 4 years ago
- The official code for our paper "A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents", published… ☆34 · Updated 3 years ago
- Live demo for speech emotion recognition using Keras and TensorFlow models ☆39 · Updated 8 months ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆101 · Updated 5 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 · Updated 6 years ago
- ☆25 · Updated 6 years ago
- Keras version of SyncNet, by Joon Son Chung and Andrew Zisserman. ☆51 · Updated 6 years ago
- ☆64 · Updated 6 years ago
- An attention-based, open-source, end-to-end speech synthesis framework: no CNN, no RNN, no MFCC! ☆85 · Updated 4 years ago
- Keras version of the Realtime Multi-Person Pose Estimation project ☆15 · Updated 6 years ago
- ☆195 · Updated 3 years ago
- This classifier detects whether a person is showing their teeth ☆3 · Updated 8 years ago
- The official implementation for the ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture generation" ☆125 · Updated 2 years ago
- FaceGrabber is introduced in the following paper: D. Merget, T. Eckl, M. Schwörer, P. Tiefenbacher, and G. Rigoll, "Capturing Facial Vide… ☆11 · Updated 8 years ago
- Audio-Visual Speech Recognition using Sequence to Sequence Models ☆82 · Updated 4 years ago
- SoundNet, built in Keras with a pre-trained 8-layer model. ☆29 · Updated 5 years ago