google-research / mint
Multi-modal Content Creation Model Training Infrastructure including the FACT model (AI Choreographer) implementation.
☆526 · Updated 3 years ago
Alternatives and similar repositories for mint:
Users interested in mint are comparing it to the libraries listed below:
- API to support the AIST++ Dataset: https://google.github.io/aistplusplus_dataset ☆365 · Updated last year
- Code for the CVPR 2022 paper "Bailando: 3D dance generation via Actor-Critic GPT with Choreographic Memory" ☆406 · Updated last year
- ☆481 · Updated last year
- Official PyTorch implementation of EDGE (CVPR 2023) ☆478 · Updated last year
- Load SMPL in Blender ☆311 · Updated last year
- An end-to-end library for automatic character rigging, skinning, and blend shape generation, as well as a visualization tool [SIGGRAPH 2… ☆665 · Updated 3 years ago
- This repository contains the dataset used in the paper "ChoreoMaster: Choreography-Oriented Music-Driven Dance Synthesis". ☆116 · Updated 3 years ago
- Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans" [3DV 2022] ☆389 · Updated last month
- Official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting". ☆399 · Updated 3 years ago
- Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement ☆383 · Updated 2 years ago
- Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity (SIGGRAPH Asia 2020) ☆256 · Updated 3 years ago
- Tools to load, process, and visualize motion capture data ☆612 · Updated 2 years ago
- (CVPR 2023) PyTorch implementation of "T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations" ☆653 · Updated 5 months ago
- [CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model ☆622 · Updated last year
- Official implementation of "Generating Diverse and Natural 3D Human Motions from Texts" (CVPR 2022) ☆536 · Updated 6 months ago
- Freeform Body Motion Generation from Speech ☆201 · Updated 2 years ago
- A Blender addon that uses ROMP to extract 3D human poses from images, video, or a webcam and drive your own 3D character. ☆253 · Updated last year
- A motion generation model learned from a single example [SIGGRAPH 2022] ☆399 · Updated 8 months ago
- Code for training the models from the paper "Learning Individual Styles of Conversational Gestures" ☆380 · Updated last year
- Extracts human motion from video and saves it as a BVH mocap file. ☆591 · Updated 4 years ago
- ExPose - EXpressive POse and Shape rEgression ☆627 · Updated 2 years ago
- Visualization code for 3D human body annotation by EFT (Exemplar Fine-Tuning) ☆386 · Updated 3 years ago
- Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space" ☆440 · Updated last year
- SMPLpix: Neural Avatars from 3D Human Models ☆438 · Updated 7 months ago
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model ☆904 · Updated 7 months ago
- VPoser: Variational Human Pose Prior ☆847 · Updated 2 years ago
- PyTorch implementation of the paper "Learning Character-Agnostic Motion for Motion Retargeting in 2D" (SIGGRAPH 2019) ☆455 · Updated 2 years ago
- Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions", ECCV 2022 (Oral) ☆391 · Updated last year
- Official implementation of the paper "Human Motion Diffusion as a Generative Prior" ☆460 · Updated last month
- ☆471 · Updated 2 years ago