facebookresearch / dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
☆11,467 · Updated 2 weeks ago
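For context, DINOv2 backbones can be loaded straight from this repo through torch.hub. The snippet below is a minimal sketch following the model names published in the repo's README (dinov2_vits14 here); exact variant names and embedding sizes should be checked against that README.

```python
import torch

# Load the ViT-S/14 DINOv2 backbone via torch.hub (model name per the repo README;
# other variants such as dinov2_vitb14 / dinov2_vitl14 follow the same pattern).
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
model.eval()

# Image sides must be multiples of the 14-pixel patch size, e.g. 224 = 16 * 14.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model(x)  # global image embedding; 384-dim for the ViT-S/14 variant
print(feats.shape)    # torch.Size([1, 384])
```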
Alternatives and similar repositories for dinov2
Users interested in dinov2 are comparing it to the libraries listed below.
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆8,771 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,700 · Updated last year
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆16,825 · Updated 11 months ago
- An open source implementation of CLIP. ☆12,487 · Updated 3 weeks ago
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆7,135 · Updated last year
- Segment Anything in High Quality [NeurIPS 2023] ☆4,054 · Updated 2 months ago
- Fast Segment Anything ☆8,041 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,576 · Updated 8 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,858 · Updated 9 months ago
- Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B). ☆2,274 · Updated 2 years ago
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,708 · Updated last month
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,458 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,556 · Updated last year
- This is the official code for the MobileSAM project that makes SAM lightweight for mobile applications and beyond! ☆5,343 · Updated 9 months ago
- SAM with text prompt ☆2,350 · Updated last week
- The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoi… ☆51,634 · Updated 11 months ago
- Open-source and strong foundation image recognition models. ☆3,390 · Updated 6 months ago
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… ☆3,033 · Updated last year
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,794 · Updated last year
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,723 · Updated 5 months ago
- Efficient vision foundation models for high-resolution generation and perception. ☆3,049 · Updated 4 months ago
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation ☆7,704 · Updated last year
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,696 · Updated last week
- ImageBind One Embedding Space to Bind Them All ☆8,772 · Updated last year
- Code release for "Masked-attention Mask Transformer for Universal Image Segmentation" ☆2,933 · Updated last year
- Grounded Language-Image Pre-training ☆2,484 · Updated last year
- Official repo for consistency models. ☆6,398 · Updated last year
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆16,724 · Updated 8 months ago
- An open-source framework for training large multimodal models. ☆3,999 · Updated last year