[CVPR 2023] Referring Multi-Object Tracking (☆153, updated Jul 2, 2024)
Alternatives and similar repositories for RMOT
Users interested in RMOT are also comparing it to the repositories listed below.
- Multi-Granularity Language-Guided Multi-Object Tracking (☆24, updated Nov 3, 2025)
- Localization-Guided Track: A Deep Association Multi-Object Tracking Framework Based on Localization Confidence of Detections (☆41, updated Jan 8, 2024)
- [CVPR 2024] iKUN: Speak to Trackers without Retraining (☆145, updated Jun 19, 2024)
- [CVPR 2023] OVTrack: Open-Vocabulary Multiple Object Tracking (☆112, updated Oct 14, 2024)
- [CVPR 2023] Tracking Multiple Deformable Objects in Egocentric Videos (☆13, updated Apr 10, 2023)
- [CVPR 2023] MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors (☆473, updated Feb 28, 2023)
- [AAAI 2025] Language Prompt for Autonomous Driving (☆155, updated Sep 22, 2025)
- [ECCV 2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer (☆783, updated Jan 15, 2024)
- YOLOX inference code for MOTRv2 (☆18, updated Nov 23, 2022)
- CO-MOT: Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking (☆102, updated Dec 15, 2025)
- [ECCV 2024] Beyond MOT: Semantic Multi-Object Tracking (☆29, updated Sep 12, 2024)
- [ICCV 2023] OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation (☆58, updated Oct 7, 2023)
- [CVPR 2023] GHOST: Simple Cues Lead to a Strong Multi-Object Tracker (☆125, updated Feb 14, 2024)
- [AAAI 2024] The official PyTorch implementation of "Unifying Visual and Vision-Language Tracking via Contrastive Learning"