chen-yingjie / Perception-as-Control
Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation"
☆55 · Updated 2 months ago
Alternatives and similar repositories for Perception-as-Control
Users interested in Perception-as-Control are comparing it with the repositories listed below.
- ☆83 · Updated last year
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis — ☆93 · Updated 11 months ago
- [arXiv'25] BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing — ☆90 · Updated 3 months ago
- UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer — ☆91 · Updated last week
- Official repo of the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" — ☆133 · Updated 2 months ago
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers — ☆60 · Updated 2 months ago
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models — ☆64 · Updated 6 months ago
- Project for Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation — ☆43 · Updated 2 months ago
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis — ☆52 · Updated last month
- [arXiv'24] StyleMaster: Stylize Your Video with Artistic Generation and Translation — ☆120 · Updated 2 months ago
- [CVPR 2025] Official implementation of MotionPro: A Precise Motion Controller for Image-to-Video Generation — ☆102 · Updated 3 weeks ago
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control — ☆81 · Updated last month
- [CVPR'25 Highlight] Official implementation of LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis — ☆147 · Updated 2 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation — ☆104 · Updated 11 months ago
- [arXiv 2024] Edicho: Consistent Image Editing in the Wild — ☆117 · Updated 5 months ago
- [SIGGRAPH 2024] Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling — ☆169 · Updated 8 months ago
- Official implementation of "Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices" (ICML 202…) — ☆56 · Updated 7 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers — ☆117 · Updated 5 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" — ☆103 · Updated last year
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation — ☆93 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] Official implementation of "MotionBooth: Motion-Aware Customized Text-to-Video Generation" — ☆133 · Updated 8 months ago
- ☆32 · Updated last month
- MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance — ☆122 · Updated 2 months ago
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance — ☆39 · Updated 4 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation — ☆83 · Updated last year
- Official repository for the ECCV 2024 paper "RegionDrag: Fast Region-Based Image Editing with Diffusion Models" — ☆54 · Updated 8 months ago
- Official implementation of Human4DiT: 360-degree Human Video Generation with 4D Diffusion Transformer — ☆89 · Updated 8 months ago
- [CVPR'25 - Rating 555] Official PyTorch implementation of Lumos: Learning Visual Generative Priors without Text — ☆51 · Updated 3 months ago
- [NeurIPS 2024] COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing — ☆24 · Updated 6 months ago
- Implementation code of the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" — ☆64 · Updated 2 weeks ago