ant-research / edicho
[ICCV 2025] Edicho: Consistent Image Editing in the Wild
☆124 · Updated 2 months ago
Alternatives and similar repositories for edicho
Users interested in edicho are comparing it to the repositories listed below.
- [SIGGRAPH ASIA'25] BlobCtrl: Taming Controllable Blob for Element-level Image Editing ☆25 · Updated last month
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆65 · Updated 8 months ago
- Official code of "Edit Transfer: Learning Image Editing via Vision In-Context Relations" ☆87 · Updated 7 months ago
- [AAAI 2025] Follow-Your-Canvas: the official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with…" ☆156 · Updated 4 months ago
- [ICCV 2025] Code for FreeScale, a tuning-free method for higher-resolution visual generation ☆146 · Updated 3 months ago
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆164 · Updated last month
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆127 · Updated 6 months ago
- The official implementation of the paper "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing" ☆165 · Updated last month
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆100 · Updated last year
- Conceptrol: Concept Control of Zero-shot Personalized Image Generation ☆44 · Updated 9 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆186 · Updated last year
- [CVPR 2025] Official implementation of MotionPro: A Precise Motion Controller for Image-to-Video Generation ☆141 · Updated 2 weeks ago
- Edit-R1: Reinforce Image Editing with Diffusion Negative-Aware Finetuning and MLLM Implicit Feedback ☆204 · Updated 3 weeks ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆87 · Updated last year
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model ☆99 · Updated last year
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆98 · Updated last year
- [ICCV 2025] FreeFlux: Understanding and Exploiting Layer-Specific Roles in RoPE-Based MMDiT for Versatile Image Editing ☆70 · Updated 4 months ago
- ☆104 · Updated last year
- MasterWeaver: Taming Editability and Face Identity for Personalized Text-to-Image Generation (ECCV 2024) ☆135 · Updated last year
- [ECCV 2024] AnyControl, a multi-control image synthesis model that supports any combination of user-provided control signals ☆128 · Updated last year
- UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer ☆118 · Updated 6 months ago
- [NeurIPS 2025] UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions ☆78 · Updated 5 months ago
- Maximize the Resolution Potential of Pre-trained Rectified Flow Transformers ☆64 · Updated last year
- Subjects200K dataset ☆129 · Updated 11 months ago
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing ☆71 · Updated 6 months ago
- [SIGGRAPH 2025] Official implementation of "Motion Inversion for Video Customization" ☆152 · Updated last year
- Official implementation of the ICCV 2025 paper CharaConsist: Fine-Grained Consistent Character Generation ☆139 · Updated 5 months ago
- Implementation code for Omni-Effects ☆163 · Updated last month
- PyTorch implementation of "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation" (CVPR 2024) ☆128 · Updated last year