SHI-Labs / Matting-Anything
Matting Anything Model (MAM) is an efficient and versatile framework for estimating the alpha matte of any instance in an image, guided by flexible and interactive visual or linguistic user prompts.
☆650 Updated last year
Alternatives and similar repositories for Matting-Anything:
Users interested in Matting-Anything are comparing it to the libraries listed below.
- [Image and Vision Computing (Vol. 147, Jul. 2024)] Interactive Natural Image Matting with Segment Anything Models ☆523 Updated 9 months ago
- [Information Fusion (Vol. 103, Mar. 2024)] Boosting Image Matting with Pretrained Plain Vision Transformers ☆408 Updated 10 months ago
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆514 Updated last year
- [CVPR 2024] Official Implementation of FreeDrag: Feature Dragging for Reliable Point-based Image Editing ☆418 Updated 2 months ago
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models (arXiv 2023 / CVPR 2024) ☆746 Updated last year
- ☆464 Updated 6 months ago
- Inpaint images with ControlNet ☆358 Updated 9 months ago
- ControlLoRA: A Lightweight Neural Network To Control Stable Diffusion Spatial Information ☆588 Updated 8 months ago
- ICLR 2024 (Spotlight) ☆750 Updated last year
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆638 Updated 8 months ago
- Official PyTorch Implementation of DenseDiffusion (ICCV 2023) ☆492 Updated last year
- [ICCV 2023] Consistent Image Synthesis and Editing ☆781 Updated 7 months ago
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆354 Updated last year
- Unofficial implementation of [StyleDrop](https://arxiv.org/abs/2306.00983) ☆577 Updated last year
- [CVPR 2023] Blind Video Deflickering by Neural Filtering with a Flawed Atlas ☆725 Updated last year
- [ICLR 2025] HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models ☆313 Updated last year
- Official implementation of CVPR 2024 paper: "FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Con…" ☆464 Updated 5 months ago
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion" ☆497 Updated last year
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆488 Updated 9 months ago
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation (ICCV 2023, Oral) ☆534 Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆817 Updated last year
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆227 Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆389 Updated last year
- Transfer the ControlNet with any base model in diffusers 🔥 ☆823 Updated last year
- Official Implementation of "Cross-Image Attention for Zero-Shot Appearance Transfer" ☆359 Updated 10 months ago
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆293 Updated 8 months ago
- ☆432 Updated 4 months ago
- [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time ☆507 Updated last year
- Unified Controllable Visual Generation Model ☆636 Updated 2 months ago
- [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention ☆696 Updated 2 months ago