ant-research / CoDeF
[CVPR'24 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
☆4,867 · Updated last year
Alternatives and similar repositories for CoDeF
Users interested in CoDeF are comparing it to the repositories listed below.
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,808 · Updated 2 years ago
- [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation ☆3,004 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆5,013 · Updated last year
- Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (DragGAN 全功… ☆4,977 · Updated 2 years ago
- Official Pytorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow" (ICLR … ☆1,697 · Updated 11 months ago
- Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold ☆2,149 · Updated 2 years ago
- [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing ☆1,445 · Updated 2 years ago
- InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin… ☆3,223 · Updated last year
- Official implementations for paper: Anydoor: zero-shot object-level image customization ☆4,204 · Updated last year
- Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference ☆4,598 · Updated last year
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,424 · Updated 10 months ago
- Official implementation of DreaMoving ☆1,801 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models ☆3,148 · Updated 11 months ago
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,908 · Updated 3 weeks ago
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" ☆1,157 · Updated 2 years ago
- [ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting ☆6,440 · Updated 10 months ago
- ☆2,458 · Updated last year
- Code Repository for CVPR 2023 Paper "PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°" ☆1,962 · Updated last year
- FaceChain is a deep-learning toolchain for generating your Digital-Twin. ☆9,500 · Updated 6 months ago
- Let us democratise high-resolution generation! (CVPR 2024) ☆2,036 · Updated 2 months ago
- T2I-Adapter ☆3,777 · Updated last year
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,232 · Updated 2 years ago
- Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model. ☆1,993 · Updated last year
- [ICLR 2024 Oral] Generative Gaussian Splatting for Efficient 3D Content Creation ☆4,266 · Updated 2 years ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,473 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,978 · Updated last year
- Consistency Distilled Diff VAE ☆2,204 · Updated 2 years ago
- [ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation ☆4,370 · Updated 2 years ago
- Open-Set Grounded Text-to-Image Generation ☆2,187 · Updated last year
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ☆1,839 · Updated 11 months ago