encord-team / text-to-image-eval
Evaluate custom and HuggingFace text-to-image / zero-shot image classification models such as CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics include zero-shot accuracy, linear-probe accuracy, image retrieval, and KNN accuracy.
☆50 · Updated 3 months ago
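A minimal sketch of the first metric above, zero-shot accuracy, using the HuggingFace `transformers` CLIP API. The checkpoint name, label set, and `zero_shot_accuracy` helper are illustrative assumptions, not code from text-to-image-eval:

```python
# Illustrative sketch only -- not text-to-image-eval's implementation.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_names = ["cat", "dog"]  # hypothetical label set
prompts = [f"a photo of a {c}" for c in class_names]

def zero_shot_accuracy(images, labels):
    """Fraction of images whose highest-scoring prompt matches the true label."""
    inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (n_images, n_prompts)
    preds = logits.argmax(dim=-1)
    return (preds == torch.tensor(labels)).float().mean().item()
```

Linear-probe and KNN accuracy follow the same pattern, except a classifier is fit on the frozen image embeddings instead of scoring text prompts.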
Alternatives and similar repositories for text-to-image-eval:
Users interested in text-to-image-eval are comparing it to the libraries listed below
- A minimal implementation of a LLaVA-style VLM with interleaved image, text & video processing ability. ☆91 · Updated 4 months ago
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆24 · Updated 3 months ago
- ☆69 · Updated last month
- Run zero-shot prediction models on your data ☆32 · Updated 4 months ago
- ☆58 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 3 months ago
- auto_labeler - An all-in-one library to automatically label vision data ☆14 · Updated 3 months ago
- ☆68 · Updated 10 months ago
- ☆43 · Updated 7 months ago
- An open-source implementation for fine-tuning SmolVLM. ☆26 · Updated last week
- Notebooks for fine-tuning PaliGemma ☆101 · Updated 3 weeks ago
- Parameter-efficient fine-tuning script for Phi-3-vision, the strong multimodal language model by Microsoft. ☆58 · Updated 10 months ago
- Fine-tuning the OpenAI CLIP model for image search on medical images ☆76 · Updated 3 years ago
- The most impactful papers related to contrastive pretraining for multimodal models! ☆66 · Updated last year
- A collection of fine-tuning scripts to help researchers fine-tune Qwen2-VL on HuggingFace datasets ☆65 · Updated 7 months ago
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆70 · Updated last week
- Solving computer vision with AI agents ☆31 · Updated 2 weeks ago
- A family of highly capable yet efficient large multimodal models ☆179 · Updated 8 months ago
- PyTorch code for hierarchical k-means -- a data curation method for self-supervised learning ☆152 · Updated 10 months ago
- ☆201 · Updated last year
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" ☆102 · Updated 10 months ago
- LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆133 · Updated 2 weeks ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-Language Era" ☆203 · Updated 11 months ago
- Use Florence-2 to auto-label data for training fine-tuned object detection models ☆63 · Updated 8 months ago
- A from-scratch implementation of a vision-language model in pure PyTorch ☆214 · Updated last year
- A quick exploration into fine-tuning Florence-2 ☆309 · Updated 7 months ago
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 7 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆154 · Updated 7 months ago