BenetManzanaresSalor / Video2Anim
A program that uses OpenPose pose detection to transform a video into a 2D animation file in Unity's .anim format. It also post-processes the results to smooth the animation and can generate animations of different people from a single video.
☆36 · Updated 2 years ago
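The description above mentions smoothing the raw OpenPose detections before exporting them as a Unity .anim clip. As an illustration of that kind of post-processing (not the repository's actual code), here is a minimal sketch that loads OpenPose's per-frame JSON keypoints and applies a moving-average filter; the output folder name, window size, and the BODY_25/single-person assumptions are illustrative.

```python
# Minimal sketch: smooth OpenPose 2D keypoints across frames with a moving average.
# Assumes one OpenPose JSON file per frame ("people" -> "pose_keypoints_2d" as a
# flat [x, y, confidence, ...] list). File layout, window size, and the
# person-selection logic are illustrative assumptions, not the repository's code.
import glob
import json
import numpy as np

def load_keypoints(json_dir, person_index=0):
    """Load one person's 2D keypoints from each per-frame OpenPose JSON file."""
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
        with open(path) as f:
            data = json.load(f)
        people = data.get("people", [])
        if len(people) > person_index:
            kp = np.array(people[person_index]["pose_keypoints_2d"]).reshape(-1, 3)
        else:
            kp = np.full((25, 3), np.nan)  # assumes the BODY_25 model: 25 joints
        frames.append(kp)
    return np.stack(frames)  # shape: (num_frames, num_joints, 3)

def smooth_keypoints(frames, window=5):
    """Centered moving-average smoothing of x/y over time; confidence is left untouched."""
    smoothed = frames.copy()
    half = window // 2
    for t in range(len(frames)):
        lo, hi = max(0, t - half), min(len(frames), t + half + 1)
        smoothed[t, :, :2] = np.nanmean(frames[lo:hi, :, :2], axis=0)
    return smoothed

if __name__ == "__main__":
    keypoints = load_keypoints("openpose_output")  # hypothetical OpenPose output folder
    keypoints = smooth_keypoints(keypoints, window=5)
    print(keypoints.shape)
```

A centered window keeps the smoothed pose aligned with the original frame timing; a larger window removes more jitter at the cost of blunting fast motions.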
Alternatives and similar repositories for Video2Anim
Users interested in Video2Anim are comparing it to the libraries listed below
- mediapipe landmark to mixamo skeleton ☆36 · Updated 2 years ago
- AI-based all-in-one character generator Blender plug-in. This project contains unofficial updates for the CEB_ECON Blender add-on. ☆89 · Updated 2 years ago
- This tool will help you build a 3D character rig without building it yourself from scratch. It will save you hours if not days of rigging… ☆26 · Updated 2 years ago
- Web-first SDK that provides real-time ARKit-compatible 52 blend shapes from a camera feed, video or image at 60 FPS using ML models. ☆84 · Updated 2 years ago
- Mirror: a Maya facial capture animation toolkit based on mediapipe ☆22 · Updated 2 years ago
- Using Mediapipe to create an OBJ of a face from a source image ☆22 · Updated last year
- Fork of Controlnet for 2 input channels ☆59 · Updated 2 years ago
- ☆105 · Updated 2 years ago
- ☆54 · Updated 4 years ago
- CV-engineering-related papers and codes. ☆12 · Updated 2 years ago
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆135 · Updated last year
- Implementation of the deformation transfer paper and its application in generating all the ARKit facial blend shapes for any 3D face ☆66 · Updated 3 years ago
- ☆113 · Updated last year
- To automate the rigging process of 2D skeletal animation, we finetuned a human pose estimation model to work for sketches as well. The da… ☆15 · Updated 9 months ago
- Add-on for Blender to import mocap data from tools like EasyMocap, FrankMocap and VIBE ☆109 · Updated 3 years ago
- Blender add-on to implement the VOCA neural network. ☆59 · Updated 2 years ago
- This is the official implementation of CT2Hair: High-fidelity 3D Hair Modeling Using Computed Tomography. ☆206 · Updated last year
- Official PyTorch implementation of 'High-Fidelity Neural Human Motion Transfer from Monocular Video' ☆86 · Updated 3 years ago
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- The PyTorch implementation of our WACV23 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis". ☆148 · Updated last year
- [ICCV23] AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control ☆186 · Updated last year
- Machine Learning project aimed at converting images into .obj 3D models by representing them as Blender hair-type particle systems. ☆25 · Updated 4 years ago
- Proof of concept for control landmarks in diffusion models! ☆86 · Updated last year
- Official code release for the ICCV 2023 paper AG3D: Learning to Generate 3D Avatars from 2D Image Collections ☆262 · Updated last year
- Automatic Facial Retargeting ☆62 · Updated 4 years ago
- A project where motion capture data is created based on the AI solution MediaPipe Holistic and applied to a 3D character in Blender ☆71 · Updated 3 years ago
- Convert VMCProtocol to MOPProtocol ☆58 · Updated 4 years ago
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022) ☆76 · Updated 2 years ago
- A 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearances under different v… ☆237 · Updated last year
- This is the source code of our 3DRW 2019 paper ☆82 · Updated 2 years ago