elsevierlabs-os / clip-image-search
Fine-tuning OpenAI CLIP Model for Image Search on medical images
☆77 · Updated 3 years ago
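The retrieval pattern behind a CLIP image-search repo like this one is: embed the query text and every image into CLIP's shared space, then rank images by cosine similarity to the query. The sketch below shows only the ranking step with plain Python lists as placeholder embeddings; the vectors and function names are illustrative, not taken from this repository.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_by_cosine(query_emb, image_embs, top_k=5):
    """Return (index, score) pairs for the top_k most similar embeddings."""
    scored = [(cosine(query_emb, e), i) for i, e in enumerate(image_embs)]
    scored.sort(reverse=True)  # highest similarity first
    return [(i, s) for s, i in scored[:top_k]]

# Toy example: query aligned with image 0, so it ranks first.
images = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
top = rank_by_cosine([2.0, 0.0], images, top_k=2)
```

In a real pipeline the image embeddings would be precomputed once (e.g. with a CLIP image encoder) and indexed for fast lookup; only the query text is embedded at search time.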
Alternatives and similar repositories for clip-image-search
Users interested in clip-image-search are comparing it to the repositories listed below.
- ☆59 · Updated last year
- Official code repository for ICML 2025 paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Doma… ☆47 · Updated 4 months ago
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … ☆69 · Updated 7 months ago
- Video descriptions of research papers relating to foundation models and scaling ☆30 · Updated 2 years ago
- A tiny package supporting distributed computation of COCO metrics for PyTorch models. ☆15 · Updated 2 years ago
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆160 · Updated last year
- Easily get basic insights about your ML dataset ☆40 · Updated 2 years ago
- Timm model explorer ☆42 · Updated last year
- ☆69 · Updated last year
- ☆78 · Updated last month
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- ☆134 · Updated 2 years ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆248 · Updated 11 months ago
- ☆80 · Updated last year
- Notebooks to demonstrate TimmWrapper ☆16 · Updated 11 months ago
- Repository for the ICLR 2024 paper "TiC-CLIP: Continual Training of CLIP Models" ☆109 · Updated last year
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆97 · Updated last year
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆33 · Updated last year
- ☆43 · Updated last year
- ☆87 · Updated last year
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind ☆72 · Updated last year
- (WACV 2025, Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- A tool for converting computer vision label formats. ☆80 · Updated 3 weeks ago
- Generalised Contrastive Learning. This is a repository for the Google Shopping Dataset and benchmarks, followed by our novel fine-grained cont… ☆68 · Updated 3 months ago
- Fine-tuning OpenAI's CLIP model on an Indian fashion dataset ☆52 · Updated 2 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated 2 years ago
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated 2 years ago
- Official PyTorch implementation of Self-Emerging Token Labeling ☆35 · Updated last year
- ☆234 · Updated 4 months ago