suchipi / arkit-face-blendshapes
A website which shows examples of the various blendshapes that can be animated using ARKit.
☆20Updated 4 years ago
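For context on what the site visualizes: ARKit reports each of the 52 face blendshapes as a per-frame coefficient on ARFaceAnchor. The following is a minimal Swift sketch, not part of this repository, assuming an active face-tracking ARSession on a TrueDepth-capable device:

```swift
import ARKit

// Minimal sketch: read a couple of ARKit's blendshape coefficients.
// Each coefficient ranges from 0 (neutral) to 1 (fully expressed).
final class FaceCaptureDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // blendShapes maps locations such as .jawOpen or .eyeBlinkLeft
            // to NSNumber coefficients.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let blinkLeft = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), eyeBlinkLeft: \(blinkLeft)")
        }
    }
}
```

To drive it, run the session with ARFaceTrackingConfiguration and keep a strong reference to the delegate (ARSession.delegate is weak). The repositories below map these same coefficients onto avatars, or estimate them from ordinary camera input instead of the TrueDepth sensor.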
Alternatives and similar repositories for arkit-face-blendshapes
Users interested in arkit-face-blendshapes are comparing it to the libraries listed below.
- Web-first SDK that provides real-time ARKit-compatible 52 blend shapes from a camera feed, video or image at 60 FPS using ML models.☆87Updated 3 years ago
- MediaPipe landmarks to Mixamo skeleton☆41Updated 2 years ago
- Realtime VRM Humanoid Avatar Animation using Human Library and ThreeJS☆96Updated 2 years ago
- CLI tool for recording or replaying Epic Games' live link face capture frames.☆83Updated 2 years ago
- ☆45Updated 3 years ago
- Phiz is a tool that allows you to perform facial motion capture from any device and location.☆139Updated 2 years ago
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging)☆82Updated 3 years ago
- Chinese text to facial expressions☆31Updated 3 years ago
- ☆12Updated 3 years ago
- ☆96Updated 4 years ago
- Generating 3D Cartoon Avatars Using 2D Facial Images☆33Updated 2 years ago
- Audio2Face Avatar with Riva SDK functionality☆75Updated 3 years ago
- Unreal Engine Live Link app + Apple ARKit blendshapes = Animoji clone☆41Updated 3 years ago
- Audio-driven facial animation generator with BiLSTM used for transcribing the speech and web interface displaying the avatar and the anim…☆35Updated 3 years ago
- Speech to Facial Animation using GANs☆40Updated 4 years ago
- Blender add-on to implement VOCA neural network.☆61Updated 3 years ago
- A project where motion capture data is created based on the AI solution MediaPipe Holistic and applied to a 3D character in Blender☆76Updated 4 years ago
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022)☆79Updated 3 years ago
- Aims to accelerate image animation model inference through inference frameworks such as ONNX, TensorRT and OpenVINO.☆76Updated last year
- ☆27Updated 2 years ago
- PyTorch reimplementation of audio-driven face mesh or blendshape models, including Audio2Mesh, VOCA, etc.☆17Updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als…☆97Updated 3 years ago
- Realtime Face/Pose/Hand Motion 3D Model Visualization and 2D Overlay using Human Library and BabylonJS☆115Updated 2 years ago
- ☆118Updated 2 years ago
- Audio-Visual Lip Synthesis via Intermediate Landmark Representation☆18Updated 2 years ago
- Implementation of the deformation transfer paper and its application in generating all the ARkit facial blend shapes for any 3D face☆66Updated 4 years ago
- Single-view real-time motion capture built upon Google MediaPipe.☆243Updated last year
- Freeform Body Motion Generation from Speech☆211Updated 3 years ago
- Mirror: a Maya facial capture animation toolkit based on MediaPipe☆22Updated 3 years ago
- Mocap Dataset of “Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation”☆161Updated 4 years ago