Efficient Track Anything
☆780 · Updated Jan 6, 2025
Alternatives and similar repositories for EfficientTAM
Users interested in EfficientTAM are comparing it to the repositories listed below.
- [CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2" (☆459, updated Oct 23, 2025)
- ☆42, updated Aug 18, 2025
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" (☆1,364, updated May 1, 2025)
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" (☆7,043, updated Mar 18, 2025)
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything (☆2,466, updated Dec 24, 2024)
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… (☆18,560, updated Dec 25, 2024)
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" (☆873, updated Jan 27, 2026)
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2, and SAM 2 (☆3,285, updated Nov 11, 2025)
- [ECCV 2024 & NeurIPS 2024 & ICLR 2026] Official implementation of TAPTR, TAPTRv2, and TAPTRv3 (☆274, updated Feb 10, 2026)
- [CVPR 2025] RollingDepth: Video Depth without Video Models (☆603, updated Mar 18, 2025)
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree (☆549, updated Jul 29, 2025)
- [ICCV 2025, Highlight] ZIM: Zero-Shot Image Matting for Anything (☆400, updated Aug 28, 2025)
- Official implementation of HiM2SAM (PRCV 2025) (☆25, updated Aug 30, 2025)
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) (☆364, updated Aug 31, 2024)
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" (☆1,117, updated May 24, 2025)
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding (☆1,337, updated Jul 23, 2025)
- PyTorch implementation of "SMITE: Segment Me In TimE" (ICLR 2025) (☆212, updated Nov 12, 2025)
- 🔥 Latest advances in Video Object Segmentation (VOS): papers, datasets, and projects (☆468, updated Feb 18, 2026)
- Tracking Any Point (TAP) (☆1,799, updated Jan 22, 2026)
- Muggled SAM: Segmentation without the magic (☆196, updated this week)
- [CVPR 2024 Highlight] Putting the Object Back Into Video Object Segmentation (☆1,014, updated Nov 8, 2024)
- Official code for "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" (☆497, updated Mar 17, 2025)
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos (☆1,771, updated Oct 7, 2025)
- OneVOS: Unifying Video Object Segmentation with All-in-One Transformer Framework (☆12, updated Feb 27, 2025)
- CoTracker: a model for tracking any point (pixel) in a video (☆4,835, updated Jan 21, 2025)
- [NeurIPS 2024] Depth Anything V2: A More Capable Foundation Model for Monocular Depth Estimation (☆7,614, updated Jan 22, 2025)
- Official code for the MobileSAM project, which makes SAM lightweight for mobile applications and beyond (☆5,631, updated Dec 19, 2025)
- Official repository for the Pixel-LLM codebase (☆1,543, updated Jan 23, 2026)
- Efficient vision foundation models for high-resolution generation and perception (☆3,243, updated Sep 5, 2025)
- [CVPR 2025] Official code for "Using Diffusion Priors for Video Amodal Segmentation" (☆111, updated Nov 13, 2025)
- Code for the CVPR 2024 paper "DiffMOT: A Real-time Diffusion-based Multiple Object Tracker with Non-linear Prediction" (☆441, updated Jun 13, 2024)
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data; a foundation model for monocular depth estimation (☆8,006, updated Jul 17, 2024)
- Depth Any Video with Scalable Synthetic Data (ICLR 2025) (☆510, updated Dec 4, 2024)
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… (☆3,106, updated Feb 22, 2026)
- [CVPR 2024 Highlight] Official PyTorch implementation of SpatialTracker: Tracking Any 2D Pixels in 3D Space (☆1,038, updated Aug 8, 2025)
- Run Segment Anything Model 2 on a live video stream (☆566, updated Jun 3, 2025)
- State-of-the-art image & video CLIP, multimodal large language models, and more (☆2,177, updated Feb 11, 2026)
- [NeurIPS 2024] Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation (☆422, updated May 22, 2025)
- RepViT: Revisiting Mobile CNN From ViT Perspective [CVPR 2024] and RepViT-SAM: Towards Real-Time Segmenting Anything (☆1,065, updated Jun 14, 2024)