ubc-vision / Make-A-Story
Code release for the CVPR 2023 paper "Make-A-Story: Visual Memory Conditioned Consistent Story Generation"
☆42 · Updated 2 years ago
Alternatives and similar repositories for Make-A-Story
Users interested in Make-A-Story are comparing it to the repositories listed below.
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- Code for "DreamEdit: Subject-driven Image Editing" (TMLR 2023) ☆108 · Updated last year
- Implementation of InstructEdit ☆75 · Updated last year
- [ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation ☆67 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆108 · Updated last year
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆42 · Updated last year
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) ☆79 · Updated last year
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆28 · Updated last year
- Official PyTorch implementation of Shape-Guided Diffusion with Inside-Outside Attention (WACV 2024) ☆37 · Updated 2 years ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆67 · Updated 9 months ago
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-Image Synthesis ☆85 · Updated last year
- DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models ☆46 · Updated last year
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆104 · Updated last year
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆76 · Updated last year
- ☆64 · Updated 2 years ago
- [NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models ☆117 · Updated 9 months ago
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing ☆103 · Updated last year
- Official implementation of the paper "Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synth… ☆92 · Updated last year
- ☆29 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆94 · Updated last year
- [ICLR 2024] MaGIC: Multi-modality Guided Image Completion ☆51 · Updated last year
- [TMLR] Official PyTorch implementation of "λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent… ☆51 · Updated 8 months ago
- ☆26 · Updated 8 months ago
- [CVPR 2024] CapHuman: Capture Your Moments in Parallel Universes ☆97 · Updated 9 months ago
- [IJCV 2025] Paragraph-to-Image Generation with Information-Enriched Diffusion Model ☆105 · Updated 4 months ago
- Code for the paper "Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models" ☆43 · Updated last year
- ☆36 · Updated 2 years ago
- ☆30 · Updated last year
- Code for the ACM MM'23 paper "LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation" ☆48 · Updated last year