elsevierlabs-os / clip-image-search
Fine-tuning OpenAI CLIP Model for Image Search on medical images
☆76 · Updated 3 years ago
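For orientation, the sketch below shows what CLIP-based image search typically looks like: a set of images and a text query are embedded with a pretrained CLIP model, then ranked by cosine similarity. This is a minimal illustration using the Hugging Face transformers API, not the repository's own code; the checkpoint name, image paths, and query string are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint; the repository fine-tunes its own CLIP weights on medical images.
model_name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)
model.eval()

# Hypothetical image files standing in for an indexed image collection.
image_paths = ["scan_001.png", "scan_002.png"]
images = [Image.open(p).convert("RGB") for p in image_paths]

with torch.no_grad():
    # Embed the images and the text query in CLIP's shared embedding space.
    image_inputs = processor(images=images, return_tensors="pt")
    image_embeds = model.get_image_features(**image_inputs)
    text_inputs = processor(text=["chest x-ray showing pneumonia"],
                            return_tensors="pt", padding=True)
    text_embeds = model.get_text_features(**text_inputs)

# L2-normalize and rank images by cosine similarity to the query.
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
scores = (text_embeds @ image_embeds.T).squeeze(0)

for idx in scores.argsort(descending=True).tolist():
    print(f"{image_paths[idx]}: {scores[idx].item():.3f}")
```

In practice the image embeddings would be precomputed once and stored in a vector index, so that only the query needs to be embedded at search time.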
Alternatives and similar repositories for clip-image-search
Users interested in clip-image-search are comparing it to the repositories listed below.
- ☆58 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Official code repository for paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆31 · Updated 9 months ago
- A tiny package supporting distributed computation of COCO metrics for PyTorch models. ☆15 · Updated 2 years ago
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … ☆70 · Updated 2 months ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆31 · Updated last year
- ☆86 · Updated last year
- A component that allows you to annotate an image with points and boxes. ☆21 · Updated last year
- ☆68 · Updated last year
- ☆76 · Updated 9 months ago
- ☆15 · Updated 11 months ago
- ☆64 · Updated last year
- GroundedSAM Base Model plugin for Autodistill ☆51 · Updated last year
- ☆133 · Updated last year
- Load any CLIP model with a standardized interface ☆21 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆159 · Updated last year
- Fine-tuning OpenAI's CLIP model on the Indian Fashion Dataset ☆50 · Updated 2 years ago
- Repository for the paper: "TiC-CLIP: Continual Training of CLIP Models". ☆102 · Updated last year
- Video descriptions of research papers relating to foundation models and scaling ☆31 · Updated 2 years ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 5 months ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 11 months ago
- Command-line tool for extracting DINO, CLIP, and SigLIP2 features for images and videos ☆26 · Updated 3 weeks ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆27 · Updated 2 months ago
- Timm model explorer ☆40 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆244 · Updated 5 months ago
- ☆75 · Updated 2 weeks ago
- Notebooks for fine-tuning PaliGemma ☆111 · Updated 3 months ago
- ☆33 · Updated 2 years ago
- ☆10 · Updated 2 years ago