SHI-Labs / Matting-Anything
Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance.
☆697 · Updated 2 years ago
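For context, an alpha matte assigns each pixel a foreground opacity in [0, 1], so a matte predicted by MAM (or any matting model) can be used in the standard compositing equation I = αF + (1 − α)B. The sketch below is a minimal, hypothetical illustration of that compositing step with NumPy; it does not use MAM's actual API, and the `matte` input is assumed to come from whatever matting model you run.

```python
# Minimal sketch (not MAM's API): composite a foreground onto a new background
# using a predicted alpha matte, i.e. I = alpha * F + (1 - alpha) * B.
import numpy as np

def composite(foreground: np.ndarray, background: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """foreground/background: HxWx3 float arrays in [0, 1];
    alpha: HxW matte in [0, 1] predicted by any matting model (assumed input)."""
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]  # HxWx1 so it broadcasts over RGB
    return alpha * foreground + (1.0 - alpha) * background

# Example with random data standing in for a real image, matte, and background.
h, w = 256, 256
fg = np.random.rand(h, w, 3)
bg = np.zeros((h, w, 3))          # plain black background
matte = np.random.rand(h, w)      # placeholder for a model-predicted matte
out = composite(fg, bg, matte)
print(out.shape)                  # (256, 256, 3)
```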
Alternatives and similar repositories for Matting-Anything
Users interested in Matting-Anything are comparing it to the repositories listed below
- [Image and Vision Computing (Vol.147 Jul. '24)] Interactive Natural Image Matting with Segment Anything Models ☆571 · Updated last year
- [Information Fusion (Vol.103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆506 · Updated 5 months ago
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆422 · Updated 9 months ago
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆521 · Updated last year
- Inpaint images with ControlNet ☆374 · Updated last year
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆758 · Updated 2 years ago
- ☆474 · Updated 7 months ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆667 · Updated last year
- ICLR 2024 (Spotlight) ☆784 · Updated last year
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆621 · Updated last year
- Unified Controllable Visual Generation Model ☆657 · Updated last year
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆585 · Updated 2 years ago
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆353 · Updated last year
- ☆238 · Updated 2 years ago
- High-Resolution Image/Video Harmonization [ECCV 2022] ☆390 · Updated 3 years ago
- [ICCV 2023] Consistent Image Synthesis and Editing ☆836 · Updated last year
- [CVPR 2023] Blind Video Deflickering by Neural Filtering with a Flawed Atlas ☆753 · Updated 8 months ago
- Official code for "Towards An End-to-End Framework for Flow-Guided Video Inpainting" (CVPR 2022) ☆1,130 · Updated 2 years ago
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆857 · Updated 2 years ago
- A novel inpainting framework that can remove objects from images based on the instructions given as text prompts. ☆382 · Updated 2 months ago
- [CVPR 2024, Highlight] Official code for DragDiffusion ☆1,249 · Updated 2 years ago
- Paint by Example: Exemplar-based Image Editing with Diffusion Models ☆1,244 · Updated 2 years ago
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆402 · Updated 2 years ago
- [ECCV 2022] Flow-Guided Transformer for Video Inpainting ☆348 · Updated 2 years ago
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,010 · Updated 2 years ago
- Official PyTorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" … ☆1,058 · Updated 2 years ago
- 🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022 ☆156 · Updated 2 years ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆505 · Updated last year
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆498 · Updated 2 years ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆357 · Updated 2 years ago