Aasthaengg / GLIP-BLIP-Vision-Langauge-Obj-Det-VQA
☆33 · Updated 2 years ago
Alternatives and similar repositories for GLIP-BLIP-Vision-Langauge-Obj-Det-VQA
Users interested in GLIP-BLIP-Vision-Langauge-Obj-Det-VQA are comparing it to the repositories listed below.
- ☆86 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… · ☆36 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch · ☆103 · Updated last year
- ☆133 · Updated last year
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … · ☆70 · Updated 3 months ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" · ☆101 · Updated 11 months ago
- [CVPR'24 Highlight] PyTorch Implementation of Object Recognition as Next Token Prediction · ☆180 · Updated 3 months ago
- ☆65 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision · ☆92 · Updated 4 years ago
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. · ☆94 · Updated 7 months ago
- Official repository for the General Robust Image Task (GRIT) Benchmark · ☆54 · Updated 2 years ago
- Fine-tuning OpenAI CLIP Model for Image Search on medical images · ☆76 · Updated 3 years ago
- EdgeSAM model for use with Autodistill. · ☆27 · Updated last year
- A simple wrapper library for binding timm models as detectron2 backbones · ☆43 · Updated 2 years ago
- Vision-oriented multimodal AI · ☆49 · Updated last year
- [CVPR 2023 Highlight] Beyond mAP: Towards better evaluation of instance segmentation · ☆27 · Updated 2 years ago
- Timm model explorer · ☆41 · Updated last year
- Official code repository for ICML 2025 paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Doma… · ☆38 · Updated 3 weeks ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch · ☆39 · Updated 3 years ago
- Easiest way of fine-tuning HuggingFace video classification models · ☆142 · Updated 2 years ago
- A tiny package supporting distributed computation of COCO metrics for PyTorch models. · ☆15 · Updated 2 years ago
- ☆59 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" · ☆246 · Updated 6 months ago
- ALIGN trained on COYO-dataset · ☆29 · Updated last year
- [BMVC22] Official Implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" · ☆55 · Updated 2 years ago
- CLIP Object Detection, search object on image using natural language #Zeroshot #Unsupervised #CLIP #ODS · ☆138 · Updated 3 years ago
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" · ☆92 · Updated last year
- Run zero-shot prediction models on your data · ☆33 · Updated 7 months ago
- This repo contains documentation and code needed to use PACO dataset: data loaders and training and evaluation scripts for objects, parts… · ☆287 · Updated last year
- Code from the paper "Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models" · ☆74 · Updated 2 months ago