TencentARC / SEED-Story
SEED-Story: Multimodal Long Story Generation with Large Language Model
☆874 · Updated last year
Alternatives and similar repositories for SEED-Story
Users interested in SEED-Story are comparing it to the repositories listed below
- StoryMaker: Towards consistent characters in text-to-image generation ☆713 · Updated 11 months ago
- [ECCV 2024] This is an official inference code of the paper "Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering" and… ☆615 · Updated 2 months ago
- AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation ☆447 · Updated 7 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ☆697 · Updated last year
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆779 · Updated last year
- [under review] The official implementation of the paper "BrushEdit: All-In-One Image Inpainting and Editing" ☆582 · Updated 2 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ☆755 · Updated 11 months ago
- [AAAI 2025] Follow-Your-Click: This repo is the official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via S… ☆907 · Updated 2 months ago
- A Training-free Iterative Framework for Long Story Visualization ☆929 · Updated 10 months ago
- Official PyTorch implementation of StreamV2V ☆520 · Updated 9 months ago
- Multimodal Models in Real World ☆548 · Updated 8 months ago
- CogView4, CogView3-Plus, and CogView3 (ECCV 2024) ☆1,091 · Updated 7 months ago
- CVPR'24, Official Codebase of our Paper: "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative H… ☆321 · Updated last year
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,611 · Updated 7 months ago
- 🔥 ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt ☆306 · Updated last month
- SCEPTER is an open-source framework used for training, fine-tuning, and inference with generative models ☆545 · Updated 7 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆634 · Updated last year
- VideoGen-Eval: Agent-based System for Video Generation Evaluation ☆250 · Updated 7 months ago
- Implementation of [CVPR 2025] "DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation" ☆877 · Updated 9 months ago
- [CVPR 2024] Make Your Dream A Vlog ☆428 · Updated 6 months ago
- Official Repo for the Paper: CHATANYTHING: FACETIME CHAT WITH LLM-ENHANCED PERSONAS ☆382 · Updated last year
- PyTorch implementation of "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆516 · Updated 8 months ago
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion ☆771 · Updated last year
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with DreamBooth, achieving stunning videos… ☆974 · Updated last year
- ☆184 · Updated 3 months ago
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ☆939 · Updated last year
- [AAAI 2025] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization ☆222 · Updated 7 months ago
- ☆634 · Updated 3 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,621 · Updated last year
- DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework ☆356 · Updated 11 months ago