yuqwu / Replace-Anything
A simple web application that lets you replace any part of an image with an image generated based on your description.
☆117 · Updated 2 years ago
Alternatives and similar repositories for Replace-Anything
Users interested in Replace-Anything are comparing it to the repositories listed below.
- Website source code for our ACM MM'23 paper "Hierarchical Masked 3D Diffusion Model for Video Outpainting". ☆40 · Updated last year
- A matting method that combines dynamic 2D foreground layers and a 3D background model. ☆145 · Updated 2 years ago
- 2nd place solution for the Generative Interior Design 2024 competition ☆125 · Updated 10 months ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆226 · Updated 2 years ago
- ☆88 · Updated last year
- Stable Fashion: A prompt-based virtual try-on repository ☆88 · Updated 2 years ago
- Implementation of "SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing" ☆85 · Updated last year
- An open-source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the de… ☆65 · Updated 2 years ago
- FLUX.1-dev LoRA Outfit Generator can create an outfit by detailing the color, pattern, fit, style, material, and type. ☆70 · Updated 11 months ago
- [EG 2023] Sketch Video Synthesis ☆218 · Updated last year
- ☆206 · Updated last year
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆265 · Updated last year
- Live2Diff: A pipeline that processes live video streams with a uni-directional video diffusion model. ☆198 · Updated last year
- SD3 DreamBooth LoRA training book, adapted from the diffusers documentation ☆48 · Updated last year
- ☆47 · Updated last year
- ☆86 · Updated last year
- ☆61 · Updated 2 years ago
- An attempt at an SVD inpainting pipeline ☆49 · Updated last year
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆129 · Updated last year
- Code for Text2Performer. Paper: "Text2Performer: Text-Driven Human Video Generation" ☆328 · Updated 2 years ago
- Controlling diffusion-based image generation with just a few strokes ☆63 · Updated last year
- Apply ControlNet to video clips ☆81 · Updated last year
- MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing ☆98 · Updated last year
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆151 · Updated 11 months ago
- GitHub repository for the paper "Personalized Restoration via Dual-Pivot Tuning". ☆137 · Updated 10 months ago
- ☆128 · Updated last year
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ☆173 · Updated 2 years ago
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆496 · Updated 2 years ago
- Retrieval-Augmented Video Generation for Telling a Story ☆258 · Updated last year
- ☆181 · Updated last year