ZhangLab-DeepNeuroCogLab / EmoEditor
☆15 · Updated 3 months ago
Alternatives and similar repositories for EmoEditor
Users who are interested in EmoEditor are comparing it to the repositories listed below.
- The official repository of "Spectral Motion Alignment for Video Motion Transfer using Diffusion Models". ☆31 · Updated last year
- ☆30 · Updated 2 years ago
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆62 · Updated last year
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆70 · Updated last year
- [NeurIPS 2025 NextVid Workshop Oral✨] Official implementation of VideoGen-of-Thought: Step-by-step generating multi-shot video with minim… ☆57 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of the research paper "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten… ☆64 · Updated 7 months ago
- Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection ☆55 · Updated 5 months ago
- My implementation of InstantBooth ☆13 · Updated 2 years ago
- EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing [ICLR 2026] ☆118 · Updated this week
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models. ☆51 · Updated last year
- [ICLR 2025, AAAI 2026] Official implementation of "Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generati… ☆34 · Updated last week
- Benchmark dataset and code of MSRVTT-Personalization ☆52 · Updated 2 months ago
- Compositional Inversion for Stable Diffusion Models (AAAI 2024) ☆37 · Updated 11 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆109 · Updated last year
- Official code repository of the CVPR 2025 paper PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation ☆60 · Updated 6 months ago
- Video-GPT via Next Clip Diffusion. ☆44 · Updated 8 months ago
- [NeurIPS 2024] COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing ☆25 · Updated last year
- Official implementation of the CVPR 2024 paper "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models". ☆92 · Updated 3 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆111 · Updated 4 months ago
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆98 · Updated last year
- FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax ☆18 · Updated 2 years ago
- [ICLR 2025] ClassDiffusion: Official implementation of the paper "ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance" ☆46 · Updated 10 months ago
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement" ☆52 · Updated last year
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆86 · Updated 11 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆24 · Updated 9 months ago
- [ECCV 2024] RegionDrag: Fast Region-Based Image Editing with Diffusion Models ☆62 · Updated last year
- [ECCV 2024] Powerful and Flexible: Personalized Text-to-Image Generation via Reinforcement Learning ☆51 · Updated 7 months ago
- [ICCV 2025] VEGGIE: Instructional Editing and Reasoning Video Concepts with Grounded Generation ☆33 · Updated 5 months ago
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation ☆24 · Updated last year
- Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion ☆45 · Updated last year