itsprakhar / Downstream-Dinov2
Downstream-Dinov2: A GitHub repository featuring an easy-to-use implementation of Facebook's DINOv2 model for downstream tasks such as classification, semantic segmentation, and monocular depth estimation.
☆244 · Updated 2 years ago
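As a rough illustration of the kind of downstream use this repository targets, the sketch below loads a DINOv2 backbone through the official facebookresearch/dinov2 torch.hub entry point and attaches a linear classification head. The head, class count, and dummy input are illustrative placeholders and are not code from this repository.

```python
import torch
import torch.nn as nn

# Load a DINOv2 backbone from the official facebookresearch/dinov2 hub entry.
# 'dinov2_vits14' is the ViT-S/14 variant; its image-level (CLS) feature is 384-d.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()  # keep the backbone frozen, linear-probe style

# Hypothetical linear head for a 10-class problem (not part of the repository).
head = nn.Linear(384, 10)

# Input height/width should be multiples of the patch size (14), e.g. 224x224.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = backbone(x)  # (1, 384) image-level features
logits = head(feats)     # (1, 10) class scores
```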
Alternatives and similar repositories for Downstream-Dinov2
Users interested in Downstream-Dinov2 are comparing it to the libraries listed below.
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆503 · Updated 7 months ago
- Testing adaptation of the DINOv2 encoder for vision tasks with Low-Rank Adaptation (LoRA) ☆155 · Updated 11 months ago
- DINOv2 for Classification, PCA Visualization, Instance Retrieval: https://arxiv.org/abs/2304.07193 ☆193 · Updated 2 years ago
- Fine-tune Segment-Anything Model with Lightning Fabric. ☆552 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆482 · Updated last year
- [CVPR 2024] Official implementation of "VRP-SAM: SAM with Visual Reference Prompt" ☆147 · Updated 9 months ago
- Using CLIP and SAM to segment any instance you specify with a text prompt of instance names ☆175 · Updated 2 years ago
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆476 · Updated last month
- [NeurIPS 2023] This repo contains the code for our paper Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti… ☆322 · Updated last year
- [ICLR 2025 oral] RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything ☆256 · Updated 3 months ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆722 · Updated last year
- Open-vocabulary Semantic Segmentation ☆351 · Updated 9 months ago
- Finetuning DINOv2 (https://github.com/facebookresearch/dinov2) on your own dataset ☆60 · Updated 2 years ago
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works ☆197 · Updated 9 months ago
- [CVPR 2024] Official PyTorch implementation of SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation. ☆177 · Updated last year
- CoRL 2024 ☆422 · Updated 8 months ago
- PyTorch implementation code adding new features to Segment-Anything; the features support batch input on the fu… ☆156 · Updated last year
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation. ☆374 · Updated 2 years ago
- [Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆434 · Updated 4 months ago
- [CVPR 2024] Code for "Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adapta… ☆171 · Updated 11 months ago
- We developed a Python UI based on labelme and segment-anything for pixel-level annotation. It supports multiple mask generation by SAM (bo… ☆378 · Updated last year
- A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies. ☆357 · Updated 7 months ago
- Fine-tune SAM (Segment Anything Model) for computer vision tasks such as semantic segmentation, matting, detection ... in specific scena… ☆841 · Updated last year
- The repository provides code for training/fine-tuning the Meta Segment Anything Model 2 (SAM 2) ☆247 · Updated 10 months ago
- [CVPR 2023] FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation ☆206 · Updated last year
- A summary of efficient Segment Anything models ☆104 · Updated 11 months ago
- A curated publication list on open-vocabulary semantic segmentation and related areas (e.g., zero-shot semantic segmentation). ☆679 · Updated 3 months ago
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) ☆456 · Updated 2 years ago
- [CVPR 2024] Official implementation of <Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segme… ☆356 · Updated last month
- SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (ICCV 2023) ☆239 · Updated 2 weeks ago