SHI-Labs / Matting-Anything
Matting Anything Model (MAM): an efficient and versatile framework for estimating the alpha matte of any instance in an image, guided by flexible and interactive visual or linguistic user prompts.
☆657 · Updated last year
Alternatives and similar repositories for Matting-Anything:
Users interested in Matting-Anything are comparing it to the libraries listed below.
- [Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models ☆524 · Updated 10 months ago
- [Information Fusion (Vol. 103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆417 · Updated 11 months ago
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆417 · Updated last week
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆515 · Updated last year
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆747 · Updated last year
- ICLR 2024 (Spotlight) ☆764 · Updated last year
- ☆464 · Updated 7 months ago
- Inpaint images with ControlNet ☆357 · Updated 10 months ago
- [ICCV 2023] Consistent Image Synthesis and Editing ☆789 · Updated 8 months ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆642 · Updated 9 months ago
- Transfer the ControlNet with any base model in diffusers 🔥 ☆828 · Updated 2 years ago
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆315 · Updated last year
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆590 · Updated 8 months ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆354 · Updated last year
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆390 · Updated last year
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆497 · Updated last year
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆578 · Updated last year
- ☆438 · Updated 5 months ago
- ☆228 · Updated last year
- Towards Unified Keyframe Propagation Models ☆238 · Updated 2 years ago
- Paint by Example: Exemplar-based Image Editing with Diffusion Models ☆1,175 · Updated last year
- A novel inpainting framework that can remove objects from images based on instructions given as text prompts ☆375 · Updated last year
- 🦙 LaMa Image Inpainting: Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022 ☆141 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆823 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆489 · Updated 9 months ago
- Implementation of "SVDiff: Compact Parameter Space for Diffusion Fine-Tuning" ☆379 · Updated last year
- [ECCV 2022] Flow-Guided Transformer for Video Inpainting ☆325 · Updated last year
- Mixture of Diffusers for scene composition and high-resolution image generation ☆435 · Updated last year
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,000 · Updated last year
- Person Image Synthesis via Denoising Diffusion Model (CVPR 2023) ☆492 · Updated 10 months ago