siyuanliii / masa
Official Implementation of CVPR24 highlight paper: Matching Anything by Segmenting Anything
☆1,362 · Updated May 1, 2025 (9 months ago)
Alternatives and similar repositories for masa
Users interested in masa are comparing it to the repositories listed below.
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection · ☆6,208 · Updated Feb 26, 2025 (11 months ago)
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation · ☆1,485 · Updated Apr 26, 2025 (9 months ago)
- OVTrack: Open-Vocabulary Multiple Object Tracking [CVPR 2023] · ☆112 · Updated Oct 14, 2024 (last year)
- [ECCV2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy · ☆2,632 · Updated Oct 15, 2025 (4 months ago)
- Official Pytorch Implementation for “DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video” (ECCV 2024) · ☆550 · Updated Nov 23, 2024 (last year)
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… · ☆18,477 · Updated Dec 25, 2024 (last year)
- Efficient Track Anything · ☆776 · Updated Jan 6, 2025 (last year)
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything · ☆2,466 · Updated Dec 24, 2024 (last year)
- Code release for CVPR'24 submission 'OmniGlue' · ☆695 · Updated Aug 12, 2024 (last year)
- Implementation of XFeat (CVPR 2024). Do you need robust and fast local feature extraction? You are in the right place! · ☆1,535 · Updated Jan 15, 2025 (last year)
- CoTracker is a model for tracking any point (pixel) on a video. · ☆4,820 · Updated Jan 21, 2025 (last year)
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … · ☆17,397 · Updated Sep 5, 2024 (last year)
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" · ☆9,725 · Updated Aug 12, 2024 (last year)
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation · ☆8,000 · Updated Jul 17, 2024 (last year)
- Official Implementation of ECCV2024 paper: SLAck · ☆29 · Updated Sep 18, 2024 (last year)
- [CVPR 2025] Multiple Object Tracking as ID Prediction · ☆472 · Updated Aug 20, 2025 (5 months ago)
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series · ☆1,086 · Updated Jan 21, 2025 (last year)
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 · ☆3,265 · Updated Nov 11, 2025 (3 months ago)
- Efficient vision foundation models for high-resolution generation and perception. · ☆3,236 · Updated Sep 5, 2025 (5 months ago)
- Code for "MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training", arXiv 2025. · ☆1,214 · Updated Dec 26, 2025 (last month)
- This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond! · ☆5,622 · Updated Dec 19, 2025 (last month)
- [CVPR2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale · ☆1,169 · Updated Oct 21, 2024 (last year)
- [CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥… · ☆4,841 · Updated Dec 3, 2025 (2 months ago)
- [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation · ☆7,579 · Updated Jan 22, 2025 (last year)
- [CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2" · ☆458 · Updated Oct 23, 2025 (3 months ago)
- [CVPR2023] MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors · ☆468 · Updated Feb 28, 2023 (2 years ago)
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" · ☆7,042 · Updated Mar 18, 2025 (10 months ago)
- Official repository for "AM-RADIO: Reduce All Domains Into One" · ☆1,634 · Updated this week
- YOLOE: Real-Time Seeing Anything [ICCV 2025] · ☆2,037 · Updated Jun 26, 2025 (7 months ago)
- [CVPR'24 & TPAMI'26] Area to Point Matching Framework · ☆157 · Updated Jan 19, 2026 (3 weeks ago)
- PyTorch code and models for the DINOv2 self-supervised learning method. · ☆12,393 · Updated Dec 22, 2025 (last month)
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding · ☆1,334 · Updated Jul 23, 2025 (6 months ago)
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" · ☆2,808 · Updated Jul 10, 2025 (7 months ago)
- Segment Anything in High Quality [NeurIPS 2023] · ☆4,177 · Updated Sep 12, 2025 (5 months ago)
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" · ☆497 · Updated Nov 20, 2025 (2 months ago)
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" · ☆1,115 · Updated May 24, 2025 (8 months ago)
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" · ☆529 · Updated Apr 8, 2024 (last year)
- ☆2,264 · Updated Jun 11, 2024 (last year)
- LightGlue: Local Feature Matching at Light Speed (ICCV 2023) · ☆4,352 · Updated Jul 9, 2025 (7 months ago)