SHI-Labs / Matting-Anything
Matting Anything Model (MAM) is an efficient and versatile framework for estimating the alpha matte of any instance in an image, guided by flexible and interactive visual or linguistic user prompts.
☆675 · Updated last year
Alternatives and similar repositories for Matting-Anything
Users interested in Matting-Anything are comparing it to the libraries listed below.
- [Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models ☆548 · Updated last year
- [Information Fusion (Vol. 103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆456 · Updated last month
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆520 · Updated last year
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆419 · Updated 5 months ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆757 · Updated last year
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆583 · Updated 2 years ago
- ☆471 · Updated 2 months ago
- ICLR 2024 (Spotlight) ☆774 · Updated last year
- Inpaint images with ControlNet ☆367 · Updated last year
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆655 · Updated last year
- High-Resolution Image/Video Harmonization [ECCV 2022] ☆371 · Updated 2 years ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆355 · Updated 2 years ago
- Unified Controllable Visual Generation Model ☆649 · Updated 7 months ago
- Paint by Example: Exemplar-based Image Editing with Diffusion Models ☆1,214 · Updated last year
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆608 · Updated last year
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models