Aasthaengg / GLIP-BLIP-Vision-Langauge-Obj-Det-VQA
☆32 · Updated 3 years ago
Alternatives and similar repositories for GLIP-BLIP-Vision-Langauge-Obj-Det-VQA
Users interested in GLIP-BLIP-Vision-Langauge-Obj-Det-VQA are comparing it to the repositories listed below.
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … ☆69 · Updated 5 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- ☆134 · Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- ☆87 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆102 · Updated 2 years ago
- Fine-tuning the OpenAI CLIP model for image search on medical images ☆76 · Updated 3 years ago
- 1st place solution in the Google Universal Image Embedding challenge ☆67 · Updated 2 years ago
- 4th place solution for the Google Universal Image Embedding Kaggle challenge. Instance-Level Recognition workshop at ECCV 2022 ☆43 · Updated 2 years ago
- A simple wrapper library for binding timm models as detectron2 backbones ☆44 · Updated 2 years ago
- Timm model explorer ☆42 · Updated last year
- A tiny package supporting distributed computation of COCO metrics for PyTorch models ☆15 · Updated 2 years ago
- PyTorch implementation of Object Recognition as Next Token Prediction [CVPR'24 Highlight] ☆180 · Updated 5 months ago
- Code from the paper "Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models" ☆92 · Updated 2 weeks ago
- Official repository for the General Robust Image Task (GRIT) benchmark ☆54 · Updated 2 years ago
- Official implementation of "Active Image Indexing" ☆59 · Updated 2 years ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆246 · Updated 9 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆96 · Updated 10 months ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- ☆65 · Updated 2 years ago
- Code for the AAAI 2023 paper "Alignment-Enriched Tuning for Patch-Level Pre-trained Document Image Models" ☆18 · Updated 2 years ago
- Vision-oriented multimodal AI ☆49 · Updated last year
- CLIP object detection: search for objects in an image using natural language #Zeroshot #Unsupervised #CLIP #ODS ☆140 · Updated 3 years ago
- ☆59 · Updated last year
- The easiest way of fine-tuning HuggingFace video classification models ☆145 · Updated 2 years ago
- A simple implementation of the Pix2Seq model for object detection in PyTorch ☆128 · Updated 2 years ago
- TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237 ☆88 · Updated 4 years ago
- Minimal sharded dataset loaders, decoders, and utils for multimodal document, image, and text datasets ☆159 · Updated last year
- Exploration of Adept's multimodal fuyu-8b model 🤓 🔍 ☆27 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆280 · Updated 8 months ago