SHI-Labs / Matting-Anything
Matting Anything Model (MAM) is an efficient and versatile framework for estimating the alpha matte of any instance in an image, guided by flexible and interactive visual or linguistic user prompts.
☆677 · Updated last year
Alternatives and similar repositories for Matting-Anything
Users interested in Matting-Anything are comparing it to the repositories listed below.
- [Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models ☆552 · Updated last year
- [Information Fusion (Vol. 103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆462 · Updated last month
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆519 · Updated last year
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆419 · Updated 5 months ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆760 · Updated last year
- High-Resolution Image/Video Harmonization [ECCV 2022] ☆373 · Updated 2 years ago
- ☆472 · Updated 3 months ago
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆612 · Updated last year
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆655 · Updated last year
- ICLR 2024 (Spotlight) ☆774 · Updated last year
- Unified Controllable Visual Generation Model ☆647 · Updated 8 months ago
- [ICCV 2023] Consistent Image Synthesis and Editing ☆811 · Updated last year
- Inpaint images with ControlNet ☆368 · Updated last year
- ☆233 · Updated 2 years ago
- Transfer the ControlNet with any base model in diffusers 🔥 ☆842 · Updated 2 years ago
- [CVPR 2023] Blind Video Deflickering by Neural Filtering with a Flawed Atlas ☆743 · Updated 4 months ago
- Official code for "Towards An End-to-End Framework for Flow-Guided Video Inpainting" (CVPR 2022) ☆1,102 · Updated 2 years ago
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆582 · Updated 2 years ago
- Paint by Example: Exemplar-based Image Editing with Diffusion Models ☆1,221 · Updated last year
- [CVPR 2024, Highlight] Official code for DragDiffusion ☆1,235 · Updated last year
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆341 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆845 · Updated last year
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆356 · Updated 2 years ago
- [ECCV 2024] Official implementation of the paper "X-Pose: Detecting Any Keypoints" ☆725 · Updated last year
- [ECCV 2022] Flow-Guided Transformer for Video Inpainting ☆335 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- Stable Diffusion for inpainting ☆215 · Updated 2 years ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆227 · Updated 2 years ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆499 · Updated last year
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆497 · Updated last year