google-research / mint
Multi-modal Content Creation Model Training Infrastructure including the FACT model (AI Choreographer) implementation.
☆546 · Updated 3 years ago
Alternatives and similar repositories for mint
Users interested in mint are comparing it to the libraries listed below.
- API to support the AIST++ Dataset: https://google.github.io/aistplusplus_dataset ☆380 · Updated 2 years ago
- Code for CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆421 · Updated 2 years ago
- Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans" [3DV 2022] ☆396 · Updated last month
- ☆486 · Updated 2 years ago
- Official PyTorch Implementation of EDGE (CVPR 2023) ☆524 · Updated last year
- This repository contains the dataset used in the paper "ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis". ☆117 · Updated 4 years ago
- Load SMPL in Blender ☆353 · Updated 2 years ago
- [SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars ☆1,100 · Updated 2 years ago
- Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity (SIGGRAPH Asia 2020) ☆273 · Updated 4 years ago
- A motion generation model learned from a single example [SIGGRAPH 2022] ☆408 · Updated last year
- ☆538 · Updated 5 years ago
- This is the official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting". ☆405 · Updated 4 years ago
- [ICLR 2023 Spotlight] EVA3D: Compositional 3D Human Generation from 2D Image Collections ☆599 · Updated 2 years ago
- Freeform Body Motion Generation from Speech ☆211 · Updated 3 years ago
- SMPLpix: Neural Avatars from 3D Human Models ☆452 · Updated last year
- Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement ☆396 · Updated 3 years ago
- PyTorch implementation for our paper "Learning Character-Agnostic Motion for Motion Retargeting in 2D", SIGGRAPH 2019 ☆475 · Updated 3 years ago
- Code for training the models from the paper "Learning Individual Styles of Conversational Gestures" ☆390 · Updated last year
- (CVPR 2023) PyTorch implementation of “T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations” ☆726 · Updated last year
- Extracts human motion from video and saves it as a BVH mocap file. ☆623 · Updated 5 years ago
- ☆504 · Updated 3 years ago
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model ☆956 · Updated last year
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆484 · Updated last year
- Human Video Generation Paper List ☆476 · Updated last year
- Code repo of the paper "DeepDance: Music-to-Dance Motion Choreography with Adversarial Learning" ☆61 · Updated 4 years ago
- A deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video [ToG 2020] ☆583 · Updated 3 years ago
- Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions", ECCV 2022 (Oral) ☆437 · Updated 2 years ago
- Single-view real-time motion capture built on Google MediaPipe. ☆240 · Updated last year
- Official implementation of "Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders". ☆144 · Updated last year
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆691 · Updated 2 years ago