SHI-Labs / Matting-Anything
Matting Anything Model (MAM) is an efficient and versatile framework for estimating the alpha matte of any instance in an image, guided by flexible and interactive visual or linguistic user prompts.
☆649 · Updated last year
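For context, an alpha matte assigns each pixel a soft opacity in [0, 1]; given a predicted matte, compositing a matted instance onto a new background is per-pixel blending, C = αF + (1 − α)B. Below is a minimal NumPy sketch of that blending step (illustrative array names only, not MAM's API):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend a matted instance onto a new background.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    alpha: matte of shape (H, W) in [0, 1], as a matting model predicts
    """
    a = alpha[..., None]  # add a trailing axis to broadcast over color channels
    return a * foreground + (1.0 - a) * background

# Toy example: a 1x2 image; left pixel fully opaque, right pixel half-transparent.
fg = np.ones((1, 2, 3))           # white foreground
bg = np.zeros((1, 2, 3))          # black background
matte = np.array([[1.0, 0.5]])
out = composite(fg, bg, matte)    # left pixel white, right pixel mid-gray
```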
Alternatives and similar repositories for Matting-Anything: users interested in Matting-Anything are comparing it to the repositories listed below.
- [Image and Vision Computing (Vol. 147, Jul. '24)] Interactive Natural Image Matting with Segment Anything Models ☆521 · Updated 9 months ago
- [Information Fusion (Vol. 103, Mar. '24)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆400 · Updated 9 months ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024 ☆746 · Updated last year
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆513 · Updated 11 months ago
- [CVPR 2024] Official Implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆417 · Updated last month
- ☆460 · Updated 6 months ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆636 · Updated 8 months ago
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆310 · Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆387 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆816 · Updated last year
- ICLR 2024 (Spotlight) ☆748 · Updated last year
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆354 · Updated last year
- Transfer ControlNet to any base model in diffusers 🔥 ☆822 · Updated last year
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆587 · Updated 7 months ago
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆578 · Updated last year
- Inpaint images with ControlNet ☆356 · Updated 9 months ago
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation (ICCV 2023, Oral) ☆533 · Updated last year
- [ICCV 2023] Consistent Image Synthesis and Editing ☆779 · Updated 7 months ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆485 · Updated 8 months ago
- [CVPR 2023] Blind Video Deflickering by Neural Filtering with a Flawed Atlas ☆724 · Updated last year
- Person Image Synthesis via Denoising Diffusion Model (CVPR 2023) ☆491 · Updated 9 months ago
- Official PyTorch Implementation of DenseDiffusion (ICCV 2023) ☆491 · Updated last year
- ☆431 · Updated 3 months ago
- Official implementation of CVPR 2024 paper: "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Con… ☆462 · Updated 5 months ago
- Unified Controllable Visual Generation Model ☆636 · Updated last month
- [ECCV 2024] Official implementation of the paper "X-Pose: Detecting Any Keypoints" ☆644 · Updated 7 months ago
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆497 · Updated last year
- [ICCV 2023] StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces ☆519 · Updated last year
- Official PyTorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" … ☆1,022 · Updated last year
- Official Implementation for "Cross-Image Attention for Zero-Shot Appearance Transfer" ☆356 · Updated 10 months ago