ntaraujo / gse
Green Screen Emulator: A.I. changing the background of your video/image
☆15 · Updated 2 years ago
Alternatives and similar repositories for gse
Users interested in gse are comparing it to the libraries listed below.
- EbSynth is hard to use... Lots of turning videos into image sequences, resizing style images to fit the original frames, renaming the st… ☆40 · Updated last year
- Personal GPEN scripts within the GPEN-Windows stand-alone package. ☆21 · Updated 3 years ago
- FILM: Frame Interpolation for Large Motion, in arXiv 2022. ☆29 · Updated 3 years ago
- Video restoration processing pipeline ☆30 · Updated last year
- Gradio UI for TortoiseTTS voice generation ☆34 · Updated last year
- ☆93 · Updated 2 years ago
- Automatic Image Morphing ☆71 · Updated 3 months ago
- Source for loopifi.com. Find and make smooth loops from videos! Create and download smoothly looping gifs and webms from YouTube and othe… ☆31 · Updated 2 years ago
- PyTorch tool that uses DeepLabv3 and MoviePy to produce masks for removing backgrounds from images (but keeping people). Green screening … (see the sketch after this list) ☆73 · Updated 2 years ago
- This is a wrapper of rem_bg for auto1111's Stable Diffusion GUI. It can do clothing segmentation, background removal, and background mask… ☆79 · Updated last year
- Windows-compatible code for the paper "Jukebox: A Generative Model for Music" ☆13 · Updated 2 years ago
- This is a modified version of NVIDIA's TalkNet. It is a controllable network that can be used for both CPU and GPU inference. ☆45 · Updated last year
- This is a HeadSwap project that swaps the whole head, not only the face ☆34 · Updated 2 years ago
- Qt-based Linux/Windows GUI for Stable Diffusion ☆35 · Updated 2 years ago
- An easy way to view the images and metadata generated by Stable Diffusion's Automatic1111 WebUI ☆40 · Updated last year
- A GUI for text2img diffusion, as a visual alternative to CLI and Jupyter Notebooks. ☆29 · Updated 2 years ago
- Official code for CVPR 2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation ☆23 · Updated 2 years ago
- Generate morph sequences with Stable Diffusion. Interpolate between two or more prompts and create an image at each step. ☆118 · Updated last year
- AI video temporal coherence lab ☆56 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Official PyTorch repo for JoJoGAN: One Shot Face Stylization with VIDEO and TRAINING ☆17 · Updated 3 years ago
- Sensing depth from 2D images and inpainting the background behind foreground objects to create 3D photos with parallax animation. ☆17 · Updated 3 years ago
- Resources for creating infinite zoom videos using Stable Diffusion; you can use multiple prompts and it is easy to use. ☆90 · Updated last year
- Towards Flexible Blind JPEG Artifacts Removal (FBCNN, ICCV 2021) ☆25 · Updated 3 years ago
- Home of the Chunkmogrify project ☆128 · Updated 3 years ago
- Code for Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR 2021) ☆27 · Updated 3 years ago
- A Colab notebook for video super-resolution using GFPGAN ☆36 · Updated 2 years ago
- stylegan3_blending ☆39 · Updated 3 years ago
- This is an implementation of iperov's DeepFaceLab and DeepFaceLive in Stable Diffusion Web UI 1111 by AUTOMATIC1111. ☆111 · Updated 8 months ago
- Video frame interpolation using RIFE ☆32 · Updated last year
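
The DeepLabv3/MoviePy mask tool in the list above (and gse itself) revolves around one common pattern for green-screen emulation: run a pretrained semantic-segmentation model, keep the pixels labelled "person", and composite everything else over a new background. The sketch below is a minimal illustration of that idea using torchvision's pretrained DeepLabv3; it is not code from any of the listed repositories, and the model choice, the class index 15 for "person", the `weights="DEFAULT"` argument, and the file names are all assumptions made for demonstration.

```python
# A minimal sketch (not the gse implementation): person segmentation with
# torchvision's pretrained DeepLabv3, then compositing over a new background.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Pretrained on a COCO subset with Pascal VOC labels; class 15 is "person".
# (Older torchvision releases use pretrained=True instead of weights="DEFAULT".)
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def person_mask(frame: Image.Image) -> np.ndarray:
    """Boolean (H, W) mask that is True wherever the model predicts 'person'."""
    batch = preprocess(frame.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)["out"][0]        # (num_classes, H, W)
    return logits.argmax(0).cpu().numpy() == 15

def replace_background(frame: Image.Image, background: Image.Image) -> Image.Image:
    """Keep the person from `frame`, fill everything else with `background`."""
    mask = person_mask(frame)[..., None]       # (H, W, 1) so it broadcasts over RGB
    fg = np.asarray(frame.convert("RGB"))
    bg = np.asarray(background.convert("RGB").resize(frame.size))
    return Image.fromarray(np.where(mask, fg, bg).astype(np.uint8))

# Hypothetical file names, for illustration only:
# out = replace_background(Image.open("frame.png"), Image.open("new_bg.jpg"))
# out.save("composited.png")
```

For video, the same per-frame function could in principle be mapped over a clip with MoviePy (e.g. `fl_image` in MoviePy 1.x); treat that as an assumption about the general approach rather than a description of how any specific repository in this list is implemented.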