licksylick / AutoTrackAnything
AutoTrackAnything is a universal, flexible and interactive tool for automatic object tracking over thousands of frames. It is built on XMem, YOLOv8 and MobileSAM (Segment Anything), and can track any object that YOLOv8 detects.
☆92 · Updated last year
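The description above names a three-stage pipeline: detect objects with YOLOv8, turn the detections into masks with a SAM variant, and propagate those masks with a memory-based tracker (XMem). As a rough illustration only, the hedged sketch below shows how YOLOv8 boxes can prompt a SAM-style predictor for first-frame masks; the `mobile_sam` import, checkpoint names, file paths and the person-only class filter are assumptions, not the project's actual API.

```python
# Minimal sketch of the detect-then-segment idea (not AutoTrackAnything's actual code):
# YOLOv8 proposes boxes, a SAM-style predictor converts them into first-frame masks,
# and those masks would then seed a memory-based mask propagator such as XMem.
import cv2
from ultralytics import YOLO                              # pip install ultralytics
from mobile_sam import sam_model_registry, SamPredictor   # assumed: mirrors the segment-anything API

detector = YOLO("yolov8n.pt")                             # any YOLOv8 detection weights
sam = sam_model_registry["vit_t"](checkpoint="mobile_sam.pt")  # hypothetical checkpoint path
predictor = SamPredictor(sam)

frame = cv2.imread("frame_0000.jpg")                      # first frame of the video
detections = detector(frame, classes=[0])[0]              # e.g. keep only the "person" class

predictor.set_image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
masks = []
for box in detections.boxes.xyxy.cpu().numpy():           # (x1, y1, x2, y2) per detection
    mask, _, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])                                 # boolean H x W mask for this object

# In the repo's pipeline, per-object masks like these initialize XMem's memory,
# which then propagates the object IDs across the remaining frames.
print(f"Initialized {len(masks)} object masks for mask propagation")
```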
Alternatives and similar repositories for AutoTrackAnything
Users interested in AutoTrackAnything are comparing it to the repositories listed below.
- Official Code for "MITracker: Multi-View Integration for Visual Object Tracking" ☆122 · Updated 7 months ago
- [AAAI 2026] Code for "SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation" ☆158 · Updated 2 months ago
- YOLO-World + EfficientViT SAM ☆106 · Updated last year
- 🏄 [ICLR 2025] OVTR: End-to-End Open-Vocabulary Multiple Object Tracking with Transformer ☆86 · Updated 5 months ago
- Combining "segment-anything" with MOT, it creates the era of "MOTS" ☆156 · Updated 2 years ago
- [ICCV 2023] ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking ☆165 · Updated last year
- Official code for NetTrack [CVPR 2024] ☆111 · Updated last year
- [ICCV 2023] MixSort: The Customized Tracker in SportsMOT ☆90 · Updated 2 years ago
- [CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2" ☆453 · Updated 3 months ago
- [ICCV 2025] Referring to any person or object given a natural language description. Code base for RexSeek and the HumanRef Benchmark ☆177 · Updated 3 months ago
- Focusing on Tracks for Online Multi-Object Tracking ☆89 · Updated 4 months ago
- Official Code for Tracking Any Object Amodally ☆120 · Updated last year
- [ECCV 2024] Keypoint Promptable Re-Identification: SOTA ReID method robust to occlusions and multi-person ambiguity ☆188 · Updated 7 months ago
- Official implementation of 🐫 CAMELTrack: Context-Aware Multi-cue ExpLoitation for Online Multi-Object Tracking 🐫 ☆102 · Updated last week
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆134 · Updated last year
- OVTrack: Open-Vocabulary Multiple Object Tracking [CVPR 2023] ☆112 · Updated last year
- [ICLR 2025 oral] RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything ☆268 · Updated 9 months ago
- Includes the VideoCount dataset and CountVid code for the paper Open-World Object Counting in Videos. ☆86 · Updated last month
- Codebase for the Recognize Anything Model (RAM) ☆88 · Updated 2 years ago
- ☆83 · Updated last month
- Official code for CAVIS: Context-Aware Video Instance Segmentation ☆95 · Updated 4 months ago
- A Graph-Based Approach for Category-Agnostic Pose Estimation [ECCV 2024] ☆384 · Updated last year
- Official PyTorch implementation for "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024) ☆548 · Updated last year
- Official PyTorch implementation of SparseTrack ☆162 · Updated 10 months ago
- YOLOv8 model combined with Meta's SAM ☆142 · Updated 2 years ago
- DETRPose: Real-time end-to-end transformer model for multi-person pose estimation ☆68 · Updated last month
- DVIS: Decoupled Video Instance Segmentation Framework ☆158 · Updated last year
- Implementation of Tracking Every Thing in the Wild, ECCV 2022 ☆96 · Updated last year
- [ECCV 2024 & NeurIPS 2024] Official implementation of the paper TAPTR & TAPTRv2 & TAPTRv3 ☆270 · Updated last year
- [CVPR 2025] Multiple Object Tracking as ID Prediction ☆459 · Updated 5 months ago