encord-team / text-to-image-eval
Evaluate custom and HuggingFace text-to-image/zero-shot-image-classification models such as CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics include zero-shot accuracy, linear probe, image retrieval, and KNN accuracy.
☆51 · Updated 4 months ago
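The zero-shot accuracy metric listed above boils down to nearest-prompt classification in a shared embedding space. A minimal sketch in NumPy, assuming image and class-prompt embeddings have already been computed (e.g. by CLIP); the function name and toy vectors are illustrative and not taken from the repo:

```python
import numpy as np

def zero_shot_accuracy(image_embs: np.ndarray, class_embs: np.ndarray,
                       labels: np.ndarray) -> float:
    """Classify each image by its most similar class prompt, then score."""
    # L2-normalize so the dot product equals cosine similarity
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    preds = (img @ txt.T).argmax(axis=1)  # highest-similarity class wins
    return float((preds == labels).mean())

# Toy example: 3 images, 2 classes, 4-dim embeddings (all made up)
images = np.array([[1.0, 0.1, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.2],
                   [0.9, 0.0, 0.1, 0.0]])
classes = np.array([[1.0, 0.0, 0.0, 0.0],   # e.g. "a photo of a cat"
                    [0.0, 0.0, 1.0, 0.0]])  # e.g. "a photo of a dog"
labels = np.array([0, 1, 0])
print(zero_shot_accuracy(images, classes, labels))  # → 1.0
```

The same normalized embeddings can be reused for the KNN-accuracy and image-retrieval metrics, which differ only in what the cosine similarities are compared against.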
Alternatives and similar repositories for text-to-image-eval
Users interested in text-to-image-eval are comparing it to the libraries listed below.
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ☆92 · Updated 5 months ago
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆25 · Updated 4 months ago
- ☆58 · Updated last year
- ☆70 · Updated 2 months ago
- Run zero-shot prediction models on your data ☆32 · Updated 5 months ago
- Fine-tuning the OpenAI CLIP model for image search on medical images ☆75 · Updated 3 years ago
- Which model is best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆70 · Updated this week
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- auto_labeler - An all-in-one library to automatically label vision data ☆15 · Updated 4 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 4 months ago
- Easily get basic insights about your ML dataset ☆38 · Updated last year
- The most impactful papers related to contrastive pretraining for multimodal models! ☆67 · Updated last year
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆83 · Updated 3 months ago
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆155 · Updated 11 months ago
- From-scratch implementation of a vision-language model in pure PyTorch ☆220 · Updated last year
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 8 months ago
- Refer to any person or object given a natural language description. Code base for RexSeek and the HumanRef Benchmark ☆132 · Updated last month
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai ☆55 · Updated last month
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images ☆31 · Updated last year
- ☆43 · Updated 8 months ago
- Perform visual question answering on your images ☆17 · Updated last year
- Quick exploration into fine-tuning Florence-2 ☆316 · Updated 8 months ago
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆207 · Updated this week
- Timm model explorer ☆39 · Updated last year
- Continuation of the abandoned fast-coco-eval project ☆110 · Updated this week
- Notebooks for fine-tuning PaliGemma ☆107 · Updated last month
- ☆68 · Updated 11 months ago
- Implementation of fine-tuning the BLIP model for Visual Question Answering ☆68 · Updated last year
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆321 · Updated 10 months ago
- A tool for converting computer vision label formats ☆62 · Updated last month