IDEA-Research / MaskDINO
[CVPR 2023] Official implementation of the paper "Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation"
☆1,365 · Updated last year
Alternatives and similar repositories for MaskDINO
Users interested in MaskDINO are comparing it to the libraries listed below:
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,388 · Updated last month
- Code release for "Cut and Learn for Unsupervised Object Detection and Instance Segmentation" and "VideoCutLER: Surprisingly Simple Unsupe… ☆1,023 · Updated last month
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆724 · Updated last year
- Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, detection ... in specific scena… ☆842 · Updated last year
- Fine-tune Segment-Anything Model with Lightning Fabric. ☆552 · Updated last year
- detrex is a research platform for DETR-based object detection, segmentation, pose estimation and other visual recognition tasks. ☆2,197 · Updated 11 months ago
- [ICCV 2023] DETRs with Collaborative Hybrid Assignments Training ☆1,267 · Updated 7 months ago
- [CVPR 2023] OneFormer: One Transformer to Rule Universal Image Segmentation ☆1,647 · Updated 9 months ago
- This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP. ☆729 · Updated last year
- A collection of papers on transformers for detection and segmentation: Awesome Detection Transformer for Computer Vision (CV) ☆1,363 · Updated last year
- Code release for "Masked-attention Mask Transformer for Universal Image Segmentation" ☆2,900 · Updated last year
- [ICLR 2023] Official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection" ☆2,576 · Updated last year
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,684 · Updated 3 weeks ago
- Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts ☆1,225 · Updated 7 months ago
- Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021, spotlight) ☆1,418 · Updated 3 years ago
- Grounded Language-Image Pre-training ☆2,472 · Updated last year
- This is a third-party implementation of the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detectio… ☆650 · Updated this week
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆916 · Updated last year
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM). ☆985 · Updated this week
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆508 · Updated 7 months ago
- A central hub for gathering and showcasing projects that extend OpenMMLab with SAM and other features. ☆1,207 · Updated last year
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,704 · Updated 4 months ago
- We developed a Python UI based on labelme and Segment Anything for pixel-level annotation. It supports generating multiple masks by SAM (bo… ☆378 · Updated last year
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆867 · Updated last week
- [NeurIPS 2022] Official code for "Focal Modulation Networks" ☆740 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆777 · Updated last year
- Code release for our CVPR 2023 paper "Detecting Everything in the Open World: Towards Universal Object Detection". ☆578 · Updated 2 years ago
- Downstream-Dino-V2: A GitHub repository featuring an easy-to-use implementation of the DINOv2 model by Facebook for downstream tasks such… ☆246 · Updated 2 years ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,008 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆482 · Updated last year