showlab / ShowAnything
☆82 · Updated 2 years ago
Alternatives and similar repositories for ShowAnything
Users interested in ShowAnything are comparing it to the libraries listed below.
- [IJCV 2025] Paragraph-to-Image Generation with Information-Enriched Diffusion Model ☆106 · Updated 9 months ago
- Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆109 · Updated last month
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆150 · Updated last year
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing ☆112 · Updated last year
- [NeurIPS 2023] Customize spatial layouts for conditional image synthesis models, e.g., ControlNet, using GPT ☆136 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆140 · Updated last year
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆132 · Updated last year
- ☆180 · Updated 2 months ago
- [WACV 2025] Follow-Your-Handle: This repo is the official implementation of "MagicStick: Controllable Video Editing via Control Handle Tr… ☆97 · Updated 2 years ago
- Implementation of InstructEdit ☆76 · Updated 2 years ago
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated 2 years ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆195 · Updated last year
- A simple script that reads a directory of videos, grabs a random frame, and automatically discovers a prompt for it ☆143 · Updated 2 years ago
- [CVPR 2024] The official implementation of the paper Relation Rectification in Diffusion Model ☆48 · Updated last year
- [CVPR 2025] Official PyTorch implementation of StoryGPT-V ☆40 · Updated 7 months ago
- [TMLR] Official PyTorch implementation of "λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent… ☆52 · Updated last year
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆164 · Updated last year
- ACM MM'23 (oral): SUR-adapter, which equips pre-trained diffusion models with powerful semantic understanding and reasoning capabilities… ☆121 · Updated 4 months ago
- Retrieval-Augmented Video Generation for Telling a Story ☆259 · Updated last year
- Code release for the paper "Make-A-Story: Visual Memory Conditioned Consistent Story Generation" in CVPR 2023 ☆43 · Updated 2 years ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆51 · Updated last year
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆78 · Updated 2 years ago
- [ACM Multimedia 2025 Datasets Track] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing ☆138 · Updated 5 months ago
- A diffusion training toolbox based on diffusers and existing SOTA methods, including DreamBooth, Textual Inversion, LoRA, Custom Diffusion… ☆82 · Updated last year
- Image Editing Anything ☆116 · Updated 2 years ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆356 · Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" ☆305 · Updated 3 months ago
- ☆114 · Updated last year
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆86 · Updated last year