jonepatr / genea_visualizer
This repository provides scripts that can be used to visualize BVH files. These scripts were developed for the GENEA Challenge 2020 and enable reproducing the visualizations used for the challenge stimuli. The server consists of several containers that are launched together with docker-compose.
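BVH files consist of a HIERARCHY section declaring the skeleton (ROOT/JOINT entries) followed by a MOTION section with per-frame channel values. As a minimal illustration, independent of this repository's actual scripts, the following sketch extracts the joint names and frame count from BVH text:

```python
import re

def parse_bvh_summary(text):
    """Return (joint names, frame count) from BVH-formatted text.

    The HIERARCHY section declares joints via ROOT/JOINT keywords;
    the MOTION section states the frame count after "Frames:".
    """
    joints = re.findall(r"(?:ROOT|JOINT)\s+(\S+)", text)
    match = re.search(r"Frames:\s*(\d+)", text)
    frames = int(match.group(1)) if match else 0
    return joints, frames

# Hypothetical two-joint BVH snippet for demonstration only.
sample = """HIERARCHY
ROOT Hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0 10 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 5 0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.0333
0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0
"""

joints, frames = parse_bvh_summary(sample)
print(joints, frames)  # ['Hips', 'Spine'] 2
```

This only skims the file header; actually rendering the motion (as the repository's visualizer does) requires parsing the OFFSET and CHANNELS data and applying the per-frame rotations down the joint hierarchy.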
☆39 · Updated 2 years ago
Alternatives and similar repositories for genea_visualizer:
Users interested in genea_visualizer are comparing it to the repositories listed below.
- This is an implementation of "Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots". ☆73 · Updated 2 years ago
- This repository contains data pre-processing and visualization scripts used in GENEA Challenge 2022 and 2023. Check the repository's READ… ☆23 · Updated this week
- Scripts for numerical evaluations for the GENEA Gesture Generation Challenge ☆23 · Updated 2 years ago
- This repository contains scripts to build the YouTube Gesture Dataset. ☆121 · Updated last year
- Talking with Hands ☆88 · Updated 3 years ago
- This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial A… ☆45 · Updated 2 years ago
- The official implementation for the ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture gener… ☆125 · Updated 2 years ago
- PATS Dataset. Aligned Pose-Audio-Transcripts and Style for co-speech gesture research ☆57 · Updated last year
- This is the official implementation for the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation". ☆109 · Updated last year
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆50 · Updated last year
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆136 · Updated last year
- This is the official repository for our publication "The IVI Lab entry to the GENEA Challenge 2022 – A Tacotron2 Based Method for Co-Spee… ☆13 · Updated last year
- ☆11 · Updated 4 years ago
- Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity (SIGGRAPH Asia 2020) ☆255 · Updated 3 years ago
- Official PyTorch implementation for Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022) ☆113 · Updated 6 months ago
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆84 · Updated last year
- This is the official implementation for the IVA '20 Best Paper Award paper "Let's Face It: Probabilistic Multi-modal Interlocutor-aware Gener… ☆16 · Updated 2 years ago
- This is the official implementation for the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation". ☆10 · Updated 2 years ago
- This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Vir… ☆27 · Updated 3 years ago
- Official repository for the paper "No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures", Findin… ☆20 · Updated 3 years ago
- The ReprGesture entry to the GENEA Challenge 2022 (ICMI 2022) ☆15 · Updated 2 years ago
- ☆196 · Updated this week
- DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ ent… ☆170 · Updated last year
- [ICCV 2021] The official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates". ☆93 · Updated last year
- multimodal transformer ☆73 · Updated 3 years ago
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆128 · Updated last year
- SGToolkit: An Interactive Gesture Authoring Toolkit for Embodied Conversational Agents (UIST 2021) ☆43 · Updated 2 years ago
- Official PyTorch implementation of the paper "A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Synthesized by Multiple Dan… ☆35 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- Fréchet Gesture Distance (Yoon et al.): exploration and eventual improvement ☆18 · Updated last year