ARIA-VALUSPA / AVP
This is the ARIA-VALUSPA Platform, or AVP for short. Use this platform to build your own Virtual Humans with audio-visual input and output, language models for English, French, and German, emotion understanding, and more. This work was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 645378.
☆32 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for AVP
- USC CS621 Course Project ☆26 · Updated last year
- A machine learning classification model capable of recognizing human facial emotions ☆23 · Updated 6 years ago
- A "talking head" project capable of displaying emotions, created using Blender and Python ☆19 · Updated 6 years ago
- ☆191 · Updated 3 years ago
- An avatar simulation for AirSim (https://github.com/Microsoft/AirSim). ☆75 · Updated 2 years ago
- The repository hosting the ICCV paper "Realistic Dynamic Facial Textures from a Single Image using GANs" ☆22 · Updated 7 years ago
- LSTM/BOF model to encode videos. Implementation of our BMVC paper "Story Understanding in Video Advertisements". ☆14 · Updated 4 years ago
- Tool for online valence and arousal annotation. ☆34 · Updated 4 years ago
- The official code for our paper "A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents", published… ☆34 · Updated 3 years ago
- An end-to-end MATLAB toolkit for completely unsupervised speaker diarization using state-of-the-art algorithms. ☆16 · Updated 8 years ago
- ECE 535 Course Project, Deep Learning Framework ☆75 · Updated 6 years ago
- A PyTorch implementation of speech recognition based on DeepMind's WaveNet ☆18 · Updated 6 years ago
- Keras version of SyncNet, by Joon Son Chung and Andrew Zisserman. ☆51 · Updated 5 years ago
- Audio Analysis by Conceptor ☆30 · Updated 9 years ago
- A script for audio/transcript alignment. Fork of p2fa. ☆69 · Updated 6 years ago
- SoundNet, built in Keras with a pre-trained 8-layer model. ☆29 · Updated 5 years ago
- Using OpenPose in a 3D animation pipeline. Based on the work of @CMU-Perceptual-Computing-Lab, @una-dinosauria, @ArashHosseini, and @keel… ☆64 · Updated 4 years ago
- Code for the paper "End-to-end Learning for 3D Facial Animation from Speech" ☆70 · Updated 2 years ago
- A library for loading, modifying, and saving BVH motion capture files. ☆12 · Updated 13 years ago
- FaceGrabber is introduced in the following paper: D. Merget, T. Eckl, M. Schwörer, P. Tiefenbacher, and G. Rigoll, "Capturing Facial Vide… ☆11 · Updated 8 years ago
- A new computer vision algorithm for recognizing the AUs typically seen in most applications, their intensities, and a large number (23… ☆17 · Updated 7 years ago
- An OpenCV demo that detects whether a person is speaking. ☆23 · Updated 12 years ago
- ☆106 · Updated 7 years ago
- Live demo of speech emotion recognition using Keras and TensorFlow models ☆39 · Updated 3 months ago
- Headbox tool for facial animation on the Microsoft Rocketbox ☆46 · Updated 2 years ago
- This module extracts emotions from audio. The input argument is either an audio/video file uploaded to the server or a URL. The o… ☆21 · Updated 6 years ago
- The official implementation of the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation". ☆107 · Updated last year
- Robust video-based eye tracking using recursive estimation of pupil characteristics ☆67 · Updated 2 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆59 · Updated 6 years ago