hustvl / Matte-Anything
[Image and Vision Computing (Vol.147 Jul. '24)] Interactive Natural Image Matting with Segment Anything Models
☆534 · Updated last year
Alternatives and similar repositories for Matte-Anything
Users interested in Matte-Anything are comparing it to the repositories listed below.
- Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexib… ☆667 · Updated last year
- [Information Fusion (Vol.103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆439 · Updated last year
- [CVPR 2024] Official implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆418 · Updated 3 months ago
- ☆103 · Updated last year
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆326 · Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆495 · Updated last year
- ZIM: Zero-Shot Image Matting for Anything ☆294 · Updated 7 months ago
- ☆468 · Updated 3 weeks ago
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆516 · Updated last year
- High-Resolution Image/Video Harmonization [ECCV 2022] ☆362 · Updated 2 years ago
- ICLR 2024 (Spotlight) ☆772 · Updated last year
- [ECCV 2024] Official implementation of the paper "X-Pose: Detecting Any Keypoints" ☆698 · Updated 10 months ago
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆499 · Updated last year
- Official implementation of the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆189 · Updated last year
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆396 · Updated 2 years ago
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- A novel inpainting framework that removes objects from images according to instructions given as text prompts ☆380 · Updated last year
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆650 · Updated 11 months ago
- [TMM 2025] StableIdentity: Inserting Anybody into Anywhere at First Sight 🔥 ☆258 · Updated 6 months ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆227 · Updated last year
- Inpaint images with ControlNet ☆362 · Updated last year
- ☆275 · Updated 11 months ago
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆257 · Updated 9 months ago
- [ICCV 2023] Consistent Image Synthesis and Editing ☆799 · Updated 10 months ago
- Image composition toolbox: everything you want to know about image composition or object insertion ☆642 · Updated 2 months ago
- Official implementation of the CVPR 2024 paper "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Con… ☆467 · Updated 8 months ago
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆303 · Updated last year
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆753 · Updated last year
- A tool for efficient semi-supervised video object segmentation (great results with minimal manual labor) and a dataset for benchmarking ☆199 · Updated last year
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time ☆511 · Updated last year