geekyutao / Inpaint-Anything
Inpaint anything using Segment Anything and inpainting models.
☆7,017 · Updated last year
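The repo's core idea is "click an object, let SAM produce its mask, then hand the mask to an inpainting model to remove or replace the object." A minimal sketch of that workflow, assuming the official `segment_anything` package and a Stable Diffusion inpainting pipeline from `diffusers` (Inpaint-Anything's own scripts use LaMa/Stable Diffusion variants and may differ in details; the click coordinates and file names here are placeholders):

```python
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor
from diffusers import StableDiffusionInpaintPipeline

# 1) Segment the clicked object with SAM.
image = np.array(Image.open("photo.png").convert("RGB"))
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[250, 187]]),  # pixel the user clicked (hypothetical)
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
mask = masks[int(np.argmax(scores))]      # keep the highest-scoring mask proposal

# 2) Fill the masked region with an inpainting model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt="clean background",            # "remove anything" style prompt
    image=Image.fromarray(image).resize((512, 512)),
    mask_image=Image.fromarray(mask.astype(np.uint8) * 255).resize((512, 512)),
).images[0]
result.save("inpainted.png")
```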
Alternatives and similar repositories for Inpaint-Anything:
Users interested in Inpaint-Anything are comparing it to the repositories listed below.
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆15,972 · Updated 6 months ago
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,371 · Updated last month
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆7,688 · Updated 7 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,533 · Updated 7 months ago
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,660 · Updated 9 months ago
- Segment Anything for Stable Diffusion WebUI ☆3,476 · Updated 10 months ago
- Open-source and strong foundation image recognition models. ☆3,122 · Updated last month
- T2I-Adapter ☆3,625 · Updated 9 months ago
- Nightly release of ControlNet 1.1 ☆4,940 · Updated 7 months ago
- 🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022 ☆8,550 · Updated last month
- Segment Anything in High Quality [NeurIPS 2023] ☆3,836 · Updated 3 months ago
- Official implementation of AnimateDiff. ☆11,195 · Updated 7 months ago
- [ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation ☆4,313 · Updated last year
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆10,061 · Updated 7 months ago
- Using Low-rank adaptation to quickly fine-tune diffusion models. ☆7,262 · Updated last year
- ☆7,770 · Updated 11 months ago
- Let us control diffusion models! ☆31,805 · Updated last year
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt. ☆5,760 · Updated 8 months ago
- Open-Set Grounded Text-to-Image Generation ☆2,098 · Updated last year
- Fast Segment Anything ☆7,780 · Updated 7 months ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,121 · Updated 7 months ago
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,559 · Updated 3 months ago
- Official repo for consistency models. ☆6,289 · Updated last year
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,538 · Updated 8 months ago
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… ☆2,914 · Updated 11 months ago
- Image to prompt with BLIP and CLIP ☆2,793 · Updated 10 months ago
- ☆6,569 · Updated last year
- Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B). ☆2,217 · Updated last year
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,152 · Updated last year
- The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoi… ☆49,379 · Updated 6 months ago
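The last entry is the official Segment Anything repository, which Inpaint-Anything builds on. For reference, a minimal sketch of its prompt-free mask generation API, assuming the `segment_anything` package and the published ViT-B checkpoint (file names are taken from the official README; the input image is a placeholder):

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Generate masks for every object in the image, no click prompts required.
image = np.array(Image.open("photo.png").convert("RGB"))
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)   # list of dicts: 'segmentation', 'area', 'bbox', ...
largest = max(masks, key=lambda m: m["area"])
print(len(masks), largest["bbox"])       # number of masks and the XYWH box of the largest one
```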