abedavis / visbeat
Code for making anything dance to anything
☆240 · Updated 4 years ago
Alternatives and similar repositories for visbeat
Users interested in visbeat are comparing it to the repositories listed below.
- [ACM MM 20 Oral] PyTorch implementation of Self-supervised Dance Video Synthesis Conditioned on Music ☆249 · Updated 2 years ago
- ☆534 · Updated 5 years ago
- Pytorch implementation of Dance Dance Generation: Motion Transfer for Internet Videos ☆44 · Updated 5 years ago
- Audio To Body Dynamics, CVPR 2018 ☆119 · Updated 6 years ago
- Dataset build for music to dance motion synthesis. ☆42 · Updated 6 years ago
- multimodal transformer ☆74 · Updated 3 years ago
- [ICLR 2021 Spotlight] A Good Image Generator Is What You Need for High-Resolution Video Synthesis ☆245 · Updated 3 years ago
- This repository contains the code for my master thesis on Emotion-Aware Facial Animation ☆147 · Updated 2 years ago
- Code for "The Face of Art: Landmark Detection and Geometric Style in Portraits" ☆271 · Updated 2 years ago
- Pytorch implementation of Animating Landscape (SIGGRAPH Asia 2019) ☆183 · Updated last year
- PyTorch implementation of our graph convolutional network (GCN) for human motion generation from music. Also with paired dance-music data… ☆88 · Updated last year
- ☆60 · Updated 6 years ago
- A curated list of awesome work on video generation and video representation learning, and related topics. ☆77 · Updated 4 years ago
- Unsupervised Any-to-many Audiovisual Synthesis via Exemplar Autoencoders ☆121 · Updated 2 years ago
- Sequential Learning for Dance generation ☆22 · Updated 4 years ago
- Animating Arbitrary Objects via Deep Motion Transfer ☆475 · Updated 2 years ago
- You Said That?: Synthesising Talking Faces from Audio ☆69 · Updated 7 years ago
- Manipulating the inner representations of StyleGAN2 ☆107 · Updated 3 years ago
- PyTorch library for "Neural Painters: A learned differentiable constraint for generating brushstroke paintings" ☆141 · Updated 5 years ago
- AI models that can doodle/sketch ☆108 · Updated 4 years ago
- The official implementation for ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture gener… ☆127 · Updated 2 years ago
- ☆246 · Updated 4 years ago
- A learning-based method for synthesizing time lapse videos of paintings ☆70 · Updated 5 years ago
- The authors' official implementation of GANalyze, a framework for studying cognitive properties such as memorability, aesthetics, and emo… ☆132 · Updated 4 years ago
- A mask-guided method for control over localized regions in StyleGAN2 images. ☆159 · Updated 4 years ago
- pix2pix-Next-Frame-Prediction generates video by recursively generating images with pix2pix. ☆33 · Updated 6 years ago
- Code for "Layered Neural Rendering for Retiming People in Video." ☆176 · Updated 4 years ago
- code for training the models from the paper "Learning Individual Styles of Conversational Gestures" ☆384 · Updated last year
- ☆198 · Updated 3 years ago
- Video style transfer using feed-forward networks. ☆381 · Updated 5 years ago