hustvl / Matte-Anything
[Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models
☆548 · Updated last year
Alternatives and similar repositories for Matte-Anything
Users interested in Matte-Anything are comparing it to the repositories listed below:
- Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexib… ☆675 · Updated last year
- [Information Fusion (Vol. 103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆456 · Updated last month
- ☆104 · Updated last year
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆419 · Updated 5 months ago
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆335 · Updated last year
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆520 · Updated last year
- [ICCV 2025, Highlight] ZIM: Zero-Shot Image Matting for Anything ☆346 · Updated 2 weeks ago
- [ECCV 2024] Official implementation of the paper "X-Pose: Detecting Any Keypoints" ☆717 · Updated last year
- Inpaint images with ControlNet ☆367 · Updated last year
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆496 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆500 · Updated last year
- High-Resolution Image/Video Harmonization [ECCV 2022] ☆371 · Updated 2 years ago
- ☆471 · Updated 2 months ago
- ICLR 2024 (Spotlight) ☆774 · Updated last year
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆757 · Updated last year
- A novel inpainting framework that can remove objects from images based on instructions given as text prompts ☆381 · Updated 2 years ago
- Official implementation for the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆189 · Updated last year
- ☆233 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆352 · Updated last year
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆398 · Updated 2 years ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆227 · Updated 2 years ago
- A tool for efficient semi-supervised video object segmentation (great results with minimal manual labor) and a dataset for benchmarking ☆202 · Updated last year
- [CVPR 2024] Code release for "DiffusionLight: Light Probes for Free by Painting a Chrome Ball" ☆678 · Updated 8 months ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆355 · Updated 2 years ago
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time ☆507 · Updated last year
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆264 · Updated 11 months ago
- Stable Diffusion for inpainting ☆212 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆655 · Updated last year
- Image composition toolbox: everything you want to know about image composition or object insertion ☆664 · Updated last week