think-biq / LLV
CLI tool for recording or replaying Epic Games' Live Link face capture frames.
☆81 · Updated last year
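LLV's own CLI flags and on-disk format are not documented on this page, so the following is only a minimal Python sketch of the record/replay idea behind it: capture raw Live Link UDP datagrams with timestamps, then re-send them later with the same timing. The port number (11111) and the one-frame-per-datagram framing are assumptions for illustration, not details taken from LLV.

```python
import socket
import struct
import time

# Assumption (not taken from LLV): the Live Link Face stream arrives as one
# frame per UDP datagram, on the commonly used port 11111.
PORT = 11111


def record(path: str, seconds: float = 10.0) -> None:
    """Capture raw Live Link datagrams and store them with relative timestamps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    sock.settimeout(0.5)
    start = time.monotonic()
    with open(path, "wb") as f:
        while time.monotonic() - start < seconds:
            try:
                data, _ = sock.recvfrom(65535)
            except socket.timeout:
                continue
            # Store each frame as: <relative timestamp, payload length, raw payload>.
            f.write(struct.pack("!dI", time.monotonic() - start, len(data)))
            f.write(data)


def replay(path: str, target: str = "127.0.0.1") -> None:
    """Re-send the recorded datagrams with their original timing."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.monotonic()
    with open(path, "rb") as f:
        header = f.read(12)
        while len(header) == 12:
            ts, length = struct.unpack("!dI", header)
            payload = f.read(length)
            # Wait until this frame's original offset from the start of recording.
            delay = ts - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            sock.sendto(payload, (target, PORT))
            header = f.read(12)


if __name__ == "__main__":
    record("frames.bin", seconds=5.0)
    replay("frames.bin")
```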
Alternatives and similar repositories for LLV
Users interested in LLV are comparing it to the libraries listed below.
- ☆166 · Updated 2 years ago
- LiveLink Source for receiving JSON over sockets (see the JSON sketch after this list). ☆104 · Updated 6 years ago
- A UE5 plugin for improving the MetaHuman ARKit face tracking. ☆94 · Updated last year
- Pose Asset with visemes for Epic's MetaHuman face skeleton ☆51 · Updated 3 years ago
- ☆95 · Updated 4 years ago
- Unreal Engine Live Link app + Apple ARKit blendshapes = Animoji clone ☆39 · Updated 3 years ago
- Demo project for NNEngine ☆11 · Updated 2 months ago
- MediaPipe landmarks to Mixamo skeleton ☆39 · Updated 2 years ago
- Send MediaPipe data to Unreal Engine with Live Link. ☆45 · Updated 2 years ago
- BlendShapeMaker (Python 3.6) ☆44 · Updated 4 years ago
- Mirror: a Maya facial capture animation toolkit based on MediaPipe ☆22 · Updated 3 years ago
- Motion capture runtime for UE4 ☆80 · Updated 4 years ago
- 3D avatar lip synchronization from speech (JALI-based face rigging) ☆82 · Updated 3 years ago
- Implementation of the deformation transfer paper and its application in generating all the ARKit facial blendshapes for any 3D face ☆66 · Updated 3 years ago
- Tools to work with the Pose Camera app ☆152 · Updated last year
- Automatic facial retargeting ☆62 · Updated 4 years ago
- A Python tool with facial landmark annotation and coefficient finder ☆311 · Updated 4 years ago
- ☆196 · Updated 4 years ago
- Convert a video file to animated HumanIK skeletons for Maya. ☆173 · Updated 2 years ago
- A Python script to modify the MetaHuman rig into a new one for Maya. (deprecated) ☆77 · Updated 3 months ago
- Phiz is a tool that allows you to perform facial motion capture from any device and location. ☆133 · Updated 2 years ago
- Add-on for Blender to import mocap data from tools like EasyMocap, FrankMocap and VIBE ☆112 · Updated 3 years ago
- Speech-driven expression blendshapes based on a single-layer self-attention network (AIWIN 2022) ☆77 · Updated 2 years ago
- ☆42 · Updated 3 years ago
- ☆47 · Updated 4 years ago
- Chinese text to facial expressions (中文到表情) ☆31 · Updated 3 years ago
- Implementation based on "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" ☆162 · Updated 5 years ago
- Source code for our 3DRW 2019 paper ☆82 · Updated 3 years ago
- Web-first SDK that provides 52 real-time ARKit-compatible blendshapes from a camera feed, video or image at 60 FPS using ML models. ☆87 · Updated 2 years ago
- Use the NVIDIA Audio2Face headless server and interact with it through a requests API. Generate animation sequences for Unreal Engine 5, … ☆133 · Updated 4 months ago
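Several of the items above (the JSON-over-sockets LiveLink source, the MediaPipe-to-Live-Link senders) boil down to the same pattern: serialize a frame of named curve values and push it to Unreal over a socket. The sketch below illustrates that pattern in Python; the port, the JSON field names (`FaceSubject`, `Curves`, the blendshape keys) and the one-frame-per-datagram framing are all assumptions for illustration, not the schema any specific plugin expects.

```python
import json
import socket
import time

# Assumptions (hypothetical, not taken from any plugin's docs): the receiving
# Live Link source listens for UDP datagrams on port 54321 and accepts one
# JSON object per datagram, keyed by subject name, with named float curves.
HOST, PORT = "127.0.0.1", 54321


def send_face_frame(sock: socket.socket, jaw_open: float, smile: float) -> None:
    """Send one hypothetical face frame as a JSON datagram."""
    payload = {
        "FaceSubject": {                  # subject name shown in Live Link
            "Curves": {                   # hypothetical curve container
                "JawOpen": jaw_open,
                "MouthSmileLeft": smile,
                "MouthSmileRight": smile,
            }
        }
    }
    sock.sendto(json.dumps(payload).encode("utf-8"), (HOST, PORT))


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Drive a simple jaw open/close cycle at roughly 60 frames per second.
    for i in range(300):
        t = i / 60.0
        send_face_frame(sock, jaw_open=abs((t % 1.0) - 0.5) * 2.0, smile=0.3)
        time.sleep(1 / 60)
```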