shashnkvats / Indofashionclip
Fine-tuning OpenAI's CLIP model on the Indian Fashion Dataset
☆45 · Updated last year
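The repository's core task, contrastive fine-tuning of CLIP on image–caption pairs, optimizes a symmetric cross-entropy loss over the batch similarity matrix. A minimal NumPy sketch of that objective (the function name, batch shapes, and temperature value are illustrative, not taken from this repo):

```python
import numpy as np

def clip_contrastive_loss(image_emb: np.ndarray, text_emb: np.ndarray,
                          temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss used in CLIP-style fine-tuning: the i-th
    image is paired with the i-th caption, so matched pairs sit on the
    diagonal of the (batch, batch) similarity matrix."""
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (batch, batch)
    labels = np.arange(logits.shape[0])            # diagonal = correct pair

    def cross_entropy(l: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)       # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return float(-log_probs[labels, labels].mean())

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

A correctly matched batch (image i aligned with caption i) should score a lower loss than one whose captions are shuffled, which is the signal the fine-tuning loop descends on.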
Related projects:
- Fine-tuning OpenAI's CLIP model for image search on medical images ☆73 · Updated 2 years ago
- [CVPR 24] Code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆178 · Updated 3 weeks ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆275 · Updated 2 months ago
- Data release for the ImageInWords (IIW) paper ☆194 · Updated 3 months ago
- GroundedSAM base model plugin for Autodistill ☆43 · Updated 5 months ago
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆65 · Updated 4 months ago
- Public repository for Image Clustering Conditioned on Text Criteria (IC|TC) ☆74 · Updated 6 months ago
- ☆55 · Updated 3 months ago
- A family of highly capable yet efficient large multimodal models ☆155 · Updated 3 weeks ago
- Use Florence-2 to auto-label data for training fine-tuned object detection models ☆54 · Updated last month
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆242 · Updated 2 weeks ago
- Famous vision-language models and their architectures ☆295 · Updated last week
- A simple script that reads a directory of videos, grabs a random frame, and automatically discovers a prompt for it ☆130 · Updated 7 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆138 · Updated last week
- VCoder: Versatile Vision Encoders for Multimodal Large Language Models (arXiv 2023 / CVPR 2024) ☆255 · Updated 5 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆33 · Updated 11 months ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆120 · Updated 8 months ago
- Video-LLaVA fine-tune for CinePile evaluation ☆33 · Updated last month
- EILEV: Efficient In-Context Learning in Vision-Language Models for Egocentric Videos ☆108 · Updated 3 months ago
- Image Prompter for Gradio ☆66 · Updated 9 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆274 · Updated 3 weeks ago
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆188 · Updated last month
- Codebase for the Recognize Anything Model (RAM) ☆58 · Updated 9 months ago
- [ICCV 2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆235 · Updated 8 months ago
- (WACV 2025) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, Hindi, B… ☆77 · Updated last week
- Object Recognition as Next Token Prediction (CVPR 2024) ☆153 · Updated 2 months ago
- Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆115 · Updated 3 months ago
- Code for the CVPR 2023 tutorial "All Things ViTs: Understanding and Interpreting Attention in Vision" ☆166 · Updated last year
- SlimSAM: 0.1% Data Makes Segment Anything Slim ☆270 · Updated 5 months ago
- Image Editing Anything ☆112 · Updated last year