ZhangLab-DeepNeuroCogLab / EmoEditor
☆15 · Updated 3 months ago
Alternatives and similar repositories for EmoEditor
Users interested in EmoEditor are comparing it to the repositories listed below.
- The official repository of "Spectral Motion Alignment for Video Motion Transfer using Diffusion Models". ☆31 · Updated last year
- [NeurIPS 2025 NextVid Workshop Oral✨] Official implementation of VideoGen-of-Thought: Step-by-step generating multi-shot video with minim… ☆57 · Updated 4 months ago
- [ICLR 2025, AAAI 2026] Official implementation of "Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generati… ☆34 · Updated last week
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆70 · Updated last year
- Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection ☆55 · Updated 5 months ago
- ☆30 · Updated 2 years ago
- My implementation of InstantBooth ☆13 · Updated 2 years ago
- Code for "VideoRepair: Improving Text-to-Video Generation via Misalignment Evaluation and Localized Refinement" ☆52 · Updated last year
- Official implementation of the CVPR 2024 paper "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models". ☆92 · Updated 3 months ago
- [NeurIPS 2024] Official implementation of "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten… ☆64 · Updated 7 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆109 · Updated last year
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆62 · Updated last year
- [NeurIPS'25 Spotlight] MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation ☆19 · Updated 11 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆24 · Updated 9 months ago
- Compositional Inversion for Stable Diffusion Models (AAAI 2024) ☆37 · Updated 11 months ago
- [ECCV 2024] RegionDrag: Fast Region-Based Image Editing with Diffusion Models ☆62 · Updated last year
- Video-GPT via Next Clip Diffusion ☆44 · Updated 8 months ago
- [ICLR 2025] Official implementation of "ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance" ☆46 · Updated 10 months ago
- [ICCV 2025] VEGGIE: Instructional Editing and Reasoning Video Concepts with Grounded Generation ☆33 · Updated 5 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆51 · Updated last year
- Benchmark dataset and code for MSRVTT-Personalization ☆52 · Updated 2 months ago
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆43 · Updated 2 years ago
- [NeurIPS 2024] COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing ☆25 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆98 · Updated last year
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆86 · Updated 11 months ago
- [ICCV 2025] Official implementation of "Anchor Token Matching: Implicit Structure Locking for Training-free AR Image Editing" ☆28 · Updated 9 months ago
- Official implementation of DragVideo ☆55 · Updated last year
- ☆47 · Updated 9 months ago
- Official implementation of "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆28 · Updated 2 years ago
- [ICLR 2026] EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing ☆118 · Updated this week