encord-team / text-to-image-eval
Evaluate custom and HuggingFace text-to-image/zero-shot-image-classification models like CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics include zero-shot accuracy, linear probe, image retrieval, and KNN accuracy.
☆56 · Updated 10 months ago
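To make the zero-shot accuracy metric above concrete, here is a minimal sketch of how such an evaluation is typically wired up with the HuggingFace transformers CLIP API. The openai/clip-vit-base-patch32 checkpoint, the CIFAR-10 test slice, and the prompt template are illustrative assumptions, not the repository's own evaluation harness.

```python
# Minimal zero-shot accuracy sketch (assumed setup, not text-to-image-eval's API).
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Small CIFAR-10 slice purely for illustration; any labeled image dataset works the same way.
dataset = load_dataset("cifar10", split="test[:256]")
class_names = dataset.features["label"].names
prompts = [f"a photo of a {name}" for name in class_names]

correct = 0
for example in dataset:
    inputs = processor(text=prompts, images=example["img"], return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image: (1, num_classes) image-text similarity scores
        logits = model(**inputs).logits_per_image
    if logits.argmax(dim=-1).item() == example["label"]:
        correct += 1

print(f"zero-shot accuracy: {correct / len(dataset):.3f}")
```

The same loop structure extends to the other listed metrics by swapping the scoring step, e.g. fitting a linear probe or a KNN classifier on the image embeddings instead of comparing against text prompts.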
Alternatives and similar repositories for text-to-image-eval
Users interested in text-to-image-eval are comparing it to the libraries listed below.
- ☆78 · Updated 2 weeks ago
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆97 · Updated 11 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆248 · Updated 10 months ago
- Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard. ☆93 · Updated last week
- Code from the paper "Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models" ☆108 · Updated last month
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆27 · Updated 11 months ago
- ☆59 · Updated last year
- Notebooks for fine-tuning PaliGemma ☆117 · Updated 7 months ago
- Solving Computer Vision with AI agents ☆34 · Updated 5 months ago
- Timm model explorer ☆42 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch ☆251 · Updated last year
- Quick exploration into fine-tuning Florence-2 ☆335 · Updated last year
- auto_labeler - An all-in-one library to automatically label vision data ☆20 · Updated 10 months ago
- Fine-tuning OpenAI CLIP Model for Image Search on medical images ☆77 · Updated 3 years ago
- AI assistant that can query visual datasets, search the FiftyOne docs, and answer general computer vision questions ☆250 · Updated last year
- A high-performance library for detecting objects in images and videos, leveraging Rust's speed and safety. Optionally supports a gRPC API… ☆32 · Updated 7 months ago
- Supercharge Your PyTorch Image Models: Bag of Tricks to 8x Faster Inference with ONNX Runtime & Optimizations ☆23 · Updated last year
- YOLOExplorer: Iterate on your YOLO / CV datasets using SQL, Vector semantic search, and more within seconds ☆137 · Updated last week
- Use Florence-2 to auto-label data for use in training fine-tuned object detection models. ☆67 · Updated last year
- Chat with Phi 3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ… ☆34 · Updated 11 months ago
- A tool for converting computer vision label formats. ☆80 · Updated last week
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- Code for replicating Roboflow 100 benchmark results and programmatically downloading benchmark datasets ☆284 · Updated last year
- ☆43 · Updated last year
- An ONNX-based implementation of the CLIP model that doesn't depend on torch or torchvision. ☆76 · Updated last year
- Parameter-efficient finetuning script for Phi-3-vision, the strong multimodal language model by Microsoft. ☆58 · Updated last year
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆65 · Updated 2 years ago
- A FiftyOne Plugin that allows you to search across any modality in your videos! ☆22 · Updated 6 months ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year