itsprakhar / Downstream-Dinov2
Downstream-Dino-V2: A GitHub repository featuring an easy-to-use implementation of Facebook's DINOv2 model for downstream tasks such as classification, semantic segmentation, and monocular depth estimation.
☆247 · Updated 2 years ago
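The downstream-classification setup the repository describes follows the common "frozen backbone + linear head" pattern: features from a pretrained encoder are kept fixed and only a small classification head is trained. A minimal, self-contained sketch of that pattern is shown below; a random projection stands in for the DINOv2 encoder (which would normally be loaded via `torch.hub`), so all dimensions and names here are illustrative assumptions, not taken from the repository itself.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, n_classes, n_samples = 384, 3, 90  # illustrative sizes only

# Stand-in for a frozen pretrained encoder (a real setup would use DINOv2
# features); the backbone is never updated during training.
backbone = rng.standard_normal((32 * 32, feat_dim)) * 0.05
images = rng.standard_normal((n_samples, 32 * 32))
labels = rng.integers(0, n_classes, n_samples)

features = images @ backbone            # frozen features, computed once
head = np.zeros((feat_dim, n_classes))  # the only trainable parameters

for _ in range(200):  # plain softmax-regression training of the linear head
    logits = features @ head
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    probs[np.arange(n_samples), labels] -= 1.0        # probs - one_hot(labels)
    head -= 0.1 * features.T @ probs / n_samples      # gradient step on head

acc = (np.argmax(features @ head, axis=1) == labels).mean()
```

Because the backbone stays frozen, only `feat_dim * n_classes` parameters are trained, which is why this pattern is practical even on small downstream datasets.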
Alternatives and similar repositories for Downstream-Dinov2
Users interested in Downstream-Dinov2 are comparing it to the libraries listed below.
- Testing adaptation of the DINOv2 encoder for vision tasks with Low-Rank Adaptation (LoRA) ☆164 · Updated last year
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆510 · Updated 8 months ago
- DINOv2 for Classification, PCA Visualization, Instance Retrieval: https://arxiv.org/abs/2304.07193 ☆197 · Updated 2 years ago
- Fine-tune Segment-Anything Model with Lightning Fabric. ☆551 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆486 · Updated last year
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆726 · Updated last year
- Using CLIP and SAM to segment any instance you specify with a text prompt naming the instance ☆176 · Updated 2 years ago
- [NeurIPS 2023] This repo contains the code for our paper Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti… ☆325 · Updated last year
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆479 · Updated last month
- [ICLR 2025 oral] RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything ☆259 · Updated 4 months ago
- Open-vocabulary Semantic Segmentation ☆351 · Updated 10 months ago
- Finetuning DINOv2 (https://github.com/facebookresearch/dinov2) on your own dataset ☆62 · Updated 2 years ago
- Training and testing of DINOv2 for downstream segmentation ☆38 · Updated 6 months ago
- SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (ICCV 2023) ☆239 · Updated last week
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation. ☆376 · Updated 2 years ago
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works ☆196 · Updated 10 months ago
- 🌌 Fine-tune a specific SAM model on any task ☆200 · Updated 10 months ago
- The repository provides code for training/fine-tuning the Meta Segment Anything Model 2 (SAM 2) ☆247 · Updated 11 months ago
- Simple Finetuning Starter Code for Segment Anything ☆136 · Updated 2 years ago
- [CVPR 2024] Official implementation of "VRP-SAM: SAM with Visual Reference Prompt" ☆153 · Updated 10 months ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆439 · Updated 5 months ago
- Code release for the paper "You Only Segment Once: Towards Real-Time Panoptic Segmentation" [CVPR 2023] ☆277 · Updated 2 years ago
- [CVPR 2024] Official PyTorch implementation of SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation. ☆177 · Updated last year
- PyTorch implementation code adding new features to Segment-Anything; the features support batch input on the fu… ☆159 · Updated last year
- CoRL 2024 ☆426 · Updated 9 months ago
- Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, detection ... in specific scena… ☆844 · Updated 2 years ago
- A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies. ☆357 · Updated 8 months ago
- This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM). ☆992 · Updated this week
- Includes the code for training and testing the CountGD model from the paper "CountGD: Multi-Modal Open-World Counting". ☆271 · Updated last month
- One summary of efficient Segment Anything models ☆105 · Updated last year
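Several of the repositories above (notably the first one) adapt a frozen DINOv2 encoder with Low-Rank Adaptation (LoRA). The core idea is to keep a pretrained weight matrix W frozen and learn only a low-rank update B·A. The sketch below shows that mechanism for a single linear layer; the dimensions, scaling, and layer names are illustrative assumptions rather than the API of any listed repository.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 384, 384, 8  # rank << d_in keeps trainable params small

W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init

def lora_linear(x, alpha=16.0):
    # Effective weight is W + (alpha / rank) * B @ A; W itself never changes.
    # Only A and B (2 * rank * d params) would receive gradients in training.
    return x @ (W + (alpha / rank) * (B @ A)).T

x = rng.standard_normal((2, d_in))
y = lora_linear(x)
# With B zero-initialised, the adapted layer matches the frozen layer exactly,
# so training starts from the pretrained model's behaviour.
assert np.allclose(y, x @ W.T)
```

The zero-initialisation of B is the standard LoRA trick: at the start of fine-tuning the low-rank branch contributes nothing, and the adapter gradually learns a task-specific correction to the frozen encoder.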