elsevierlabs-os / clip-image-search
Fine-tuning OpenAI CLIP Model for Image Search on medical images
☆76 · Updated 3 years ago
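For context on what the alternatives below are compared against, here is a minimal sketch of contrastive CLIP fine-tuning and embedding-based text-to-image search using Hugging Face transformers. This is not this repository's actual code; the checkpoint name, training step, and search helper are illustrative assumptions.

```python
# Minimal sketch (assumed, not this repo's code): contrastive fine-tuning of
# CLIP on (image, caption) pairs, then text-to-image search over embeddings.
import torch
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"  # assumed base checkpoint
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

def train_step(images, captions):
    # Preprocess a batch of paired images and captions into model inputs.
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    # return_loss=True makes CLIPModel compute the symmetric
    # image-text contrastive (InfoNCE) loss internally.
    loss = model(**inputs, return_loss=True).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

@torch.no_grad()
def search(query: str, image_embs: torch.Tensor) -> torch.Tensor:
    # Rank pre-computed, L2-normalized image embeddings against a text query
    # by cosine similarity; returns gallery indices, best match first.
    text_inputs = processor(text=[query], return_tensors="pt", padding=True)
    q = model.get_text_features(**text_inputs)
    q = q / q.norm(dim=-1, keepdim=True)
    return (image_embs @ q.T).squeeze(-1).argsort(descending=True)
```

At index time the gallery images would be embedded once with `model.get_image_features` and normalized, so only a short text-encoder pass runs per query.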
Alternatives and similar repositories for clip-image-search
Users interested in clip-image-search are comparing it to the repositories listed below.
- ☆58 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… (see the zero-shot sketch after this list) ☆35 · Updated last year
- Generalised Contrastive Learning. A repository for the Google Shopping dataset and benchmarks, followed by our novel fine-grained cont… ☆62 · Updated last month
- ☆64 · Updated last year
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆24 · Updated 4 months ago
- Fine-tuning OpenAI's CLIP model on the Indian Fashion Dataset ☆51 · Updated last year
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆153 · Updated 10 months ago
- (WACV 2025 Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 3 months ago
- Evaluate custom and HuggingFace text-to-image/zero-shot-image-classification models like CLIP, SigLIP, DFN5B, and EVA-CLIP. Metrics inclu… ☆51 · Updated 4 months ago
- ☆224 · Updated 3 years ago
- ☆75 · Updated 7 months ago
- A component that allows you to annotate an image with points and boxes ☆20 · Updated last year
- ☆45 · Updated 4 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆245 · Updated 3 months ago
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets, such as duplicates, anomalies, data leakage, … ☆69 · Updated last week
- Official code repository for the paper "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆31 · Updated 7 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 5 months ago
- ☆68 · Updated 10 months ago
- ☆88 · Updated last year
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 8 months ago
- Evaluation and dataset-construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆21 · Updated 3 weeks ago
- Minimal sharded dataset loaders, decoders, and utilities for multi-modal document, image, and text datasets ☆157 · Updated last year
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆177 · Updated 2 weeks ago
- ☆63 · Updated 7 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ☆91 · Updated 5 months ago
- ☆43 · Updated 7 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆189 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆230 · Updated 2 months ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks, from Google, in PyTorch ☆101 · Updated last year
- ☆65 · Updated 7 months ago
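Several of the repositories above benchmark zero-shot classification with CLIP-style models. A minimal sketch of what that evaluation looks like, assuming a standard checkpoint and an illustrative label set:

```python
# Hypothetical sketch of zero-shot image classification with CLIP, the kind
# of evaluation the benchmarking repos above perform. The checkpoint name,
# label set, and prompt template are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["chest x-ray", "brain mri", "histopathology slide"]  # assumed classes
prompts = [f"a photo of a {c}" for c in labels]

@torch.no_grad()
def classify(image: Image.Image) -> str:
    # Score the image against one text prompt per class; no training needed.
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    # logits_per_image: cosine similarities scaled by the learned temperature.
    logits = model(**inputs).logits_per_image
    return labels[logits.softmax(dim=-1).argmax().item()]
```

Accuracy on a labeled test set then reduces to comparing `classify(image)` against the ground-truth label, which is why prompt wording (the template above) matters as much as the model in these benchmarks.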