elsevierlabs-os / clip-image-search
Fine-tuning OpenAI CLIP Model for Image Search on medical images
☆76 · Updated 2 years ago
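The projects in this listing share a common retrieval core: encode images and text queries into a shared embedding space (with CLIP or a similar model), then rank images by cosine similarity to the query. The sketch below shows only that ranking step, using random stand-in embeddings instead of real CLIP outputs; in an actual pipeline the vectors would come from CLIP's image and text encoders, and `cosine_search` is a hypothetical helper name, not part of any repo listed here.

```python
import numpy as np

def cosine_search(query_emb, image_embs, top_k=3):
    """Rank images by cosine similarity between a query embedding
    and a matrix of image embeddings (one row per image)."""
    q = query_emb / np.linalg.norm(query_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ q                       # cosine similarity per image
    order = np.argsort(-sims)[:top_k]     # indices of the top_k matches
    return list(zip(order.tolist(), sims[order].tolist()))

# Stand-in embeddings: 100 "images" in a 512-dim space.
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(100, 512))
# Simulate a query that semantically matches image 42 (its vector plus noise).
query_emb = image_embs[42] + 0.05 * rng.normal(size=512)

results = cosine_search(query_emb, image_embs)
print(results[0][0])  # index of the best match
```

The same ranking logic applies whether the embeddings come from zero-shot CLIP or from a fine-tuned checkpoint; fine-tuning (as in this repo, on medical images) only changes how the vectors are produced, not how they are searched.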
Alternatives and similar repositories for clip-image-search:
Users interested in clip-image-search are comparing it to the repositories listed below.
- ☆58 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- ☆131 · Updated last year
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆24 · Updated 2 months ago
- Official code repository for the paper "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆31 · Updated 5 months ago
- ☆64 · Updated 5 months ago
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … ☆67 · Updated last year
- A tool for converting computer vision label formats. ☆61 · Updated 2 weeks ago
- Timm model explorer ☆37 · Updated 11 months ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆30 · Updated last year
- ☆68 · Updated 9 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆242 · Updated 2 months ago
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 6 months ago
- Generalised Contrastive Learning: a repository for the Google Shopping dataset and benchmarks, with our novel fine-grained cont… ☆58 · Updated last month
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind ☆59 · Updated 6 months ago
- GroundedSAM Base Model plugin for Autodistill ☆49 · Updated 11 months ago
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" ☆102 · Updated 9 months ago
- ☆64 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆224 · Updated last month
- Fine-tuning OpenAI's CLIP model on the Indian Fashion Dataset ☆51 · Updated last year
- ☆44 · Updated 2 months ago
- ☆63 · Updated 6 months ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆100 · Updated last year
- [ACM TOMM 2023] Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features ☆175 · Updated last year
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 8 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆90 · Updated 3 months ago
- ☆74 · Updated 5 months ago
- Notebooks for fine-tuning PaliGemma ☆98 · Updated 3 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated last month
- Easily get basic insights about your ML dataset ☆35 · Updated last year