e-Dylan / gan_faceanimator
GAN deep learning model that takes AI-generated faces from /gan_facegenerator, turns them into cartoon characters, and animates them.
☆16 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for gan_faceanimator
- Based on SadTalker; implements video lip synthesis. ☆11 · Updated 10 months ago
- Uses ChatGPT, TTS, and Stable Diffusion to automatically generate videos. ☆28 · Updated last year
- AI-powered animation tool. ☆49 · Updated 3 years ago
- Automatically generates a lip-synced avatar based on a transcript and audio. ☆14 · Updated last year
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations". ☆17 · Updated 9 months ago
- AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models. ☆31 · Updated last year
- (CVPR 2023) SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation. ☆26 · Updated 5 months ago
- One-shot face animation using a webcam, capable of running in real time. ☆31 · Updated 5 months ago
- Orchestrating AI for stunning lip-synced videos: effortless workflow, exceptional results, all in one place. ☆64 · Updated 4 months ago
- A website showing examples of the various blendshapes that can be animated using ARKit. ☆15 · Updated 2 years ago
- SadTalker gradio_demo.py file with a code section that lets you set the eye-blink and pose reference videos for the software to use wh… ☆11 · Updated last year
- Canvas-based talking-head model using viseme data. ☆28 · Updated last year
- A software pipeline for creating realistic videos of people talking, using only images. ☆38 · Updated 2 years ago
- Generates 3D cartoon avatars from 2D facial images. ☆29 · Updated last year
- A simple face swapper based on the insightface inswapper. ☆15 · Updated 11 months ago
- SuperGAN aims to develop subject-agnostic, real-time face swapping. ☆20 · Updated 2 years ago
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… ☆94 · Updated 2 years ago
- AvaChat: a real-time AI chat demo with animated talking heads; it uses large language models (GPT, API2D GPT-4, Claude) as text inputs… ☆75 · Updated 2 weeks ago
- Audio-driven video synthesis. ☆40 · Updated 2 years ago
- GUI to sync video mouth movements to match audio, using wav2lip-hq. Completed as part of a technical interview. ☆11 · Updated 5 months ago
- Auto-video maker orchestrating several AI models. ☆12 · Updated 7 months ago
- Generate video stories with AI ✨ ☆28 · Updated 2 months ago
- The PyTorch implementation of our WACV23 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis". ☆145 · Updated last year
- AI video converter based on ControlNet. ☆69 · Updated last year
- A modified version of vid2vid for the Speech2Video / Text2Video paper. ☆35 · Updated last year
- Talking-head animation. ☆27 · Updated 11 months ago
- Contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Mult… ☆33 · Updated 9 months ago
- Basic framework for training Dreambooth Stable Diffusion v1.5 on Banana's v1.0 serverless GPU platform. ☆36 · Updated last year
- Audio-driven facial animation generator with a BiLSTM used for transcribing the speech, and a web interface displaying the avatar and the anim… ☆34 · Updated 2 years ago
- Wav2Lip UHQ improvement with ControlNet 1.1. ☆73 · Updated last year