mikkoim / dinotool
Command-line tool for extracting DINO, CLIP, and SigLIP2 features for images and videos
☆24 · Updated 3 weeks ago
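For context, the snippet below is a minimal sketch of the kind of DINO feature extraction that dinotool automates, using the published DINOv2 torch.hub entry point. It is an illustration only, not dinotool's actual command-line interface, and the input file name is a placeholder.

```python
# Illustrative sketch: the kind of DINO feature extraction dinotool wraps in a CLI.
# NOT dinotool's actual interface; "frame.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import transforms

# DINOv2 ViT-S/14 backbone via the published torch.hub entry point
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # 224 is divisible by the 14-px patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(img)  # (1, 384) global image embedding
print(features.shape)
```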
Alternatives and similar repositories for dinotool
Users interested in dinotool are comparing it to the libraries listed below.
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 11 months ago
- ☆58 · Updated last year
- ☆13 · Updated 10 months ago
- Timm model explorer ☆40 · Updated last year
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- Official code repository for paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆31 · Updated 9 months ago
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 7 months ago
- Recipe for training fully-featured self-supervised image JEPA models ☆10 · Updated last month
- Official PyTorch implementation of Self-emerging Token Labeling ☆33 · Updated last year
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆31 · Updated last year
- A tiny package supporting distributed computation of COCO metrics for PyTorch models. ☆15 · Updated 2 years ago
- Load any CLIP model with a standardized interface ☆21 · Updated last year (a generic CLIP-loading sketch follows this list)
- Implementation of the MC-ViT model from the paper: "Memory Consolidation Enables Long-Context Video Understanding" ☆20 · Updated 3 months ago
- Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 8 months ago
- Induce brain-like topographic structure in your neural networks ☆62 · Updated last month
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor Vision Language models. ☆18 · Updated 6 months ago
- Repository for the paper: "TiC-CLIP: Continual Training of CLIP Models". ☆102 · Updated last year
- A tool for converting computer vision label formats. ☆64 · Updated 2 months ago
- Notebooks to demonstrate TimmWrapper ☆16 · Updated 5 months ago
- ☆15 · Updated 11 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ☆24 · Updated 4 months ago
- ☆76 · Updated 8 months ago
- A component that allows you to annotate an image with points and boxes. ☆21 · Updated last year
- This repository includes the code to download the curated HuggingFace papers into a single markdown-formatted file ☆14 · Updated 11 months ago
- ☆49 · Updated last week
- ☆74 · Updated 2 weeks ago
- Experimental scripts for researching data adaptive learning rate scheduling. ☆23 · Updated last year
- Fine-tuning OpenAI CLIP Model for Image Search on medical images ☆76 · Updated 3 years ago
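For the "standardized interface" CLIP item above, here is a minimal sketch of what such a unified loader typically looks like, using the open_clip package. This illustrates the general pattern only; it is not the linked repository's own API, and "frame.jpg" is a placeholder path.

```python
# Minimal sketch with open_clip, showing a standardized model/tokenizer/preprocess
# loading pattern. Not the linked repository's API; "frame.jpg" is a placeholder.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)  # (1, 512)
    text_features = model.encode_text(text)     # (3, 512)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```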