shashnkvats / Indofashionclip
Fine-tuning OpenAI's CLIP model on the Indian Fashion Dataset
☆51 · Updated last year
Alternatives and similar repositories for Indofashionclip:
Users interested in Indofashionclip are comparing it to the repositories listed below.
- A component that allows you to annotate an image with points and boxes. ☆19 · Updated last year
- Finetuning CLIP on a small image/text dataset using huggingface libs ☆46 · Updated 2 years ago
- Image Prompter for Gradio ☆85 · Updated last year
- This is a public repository for Image Clustering Conditioned on Text Criteria (IC|TC) ☆85 · Updated last year
- Fine-tuning OpenAI CLIP Model for Image Search on medical images ☆76 · Updated 2 years ago
- Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision. ☆185 · Updated last year
- Estimate dataset difficulty and detect label mistakes using reconstruction error ratios! ☆24 · Updated 2 months ago
- Codebase for the Recognize Anything Model (RAM) ☆75 · Updated last year
- [ICLR 2024] Official code for the paper "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts" ☆73 · Updated 10 months ago
- Finetuning CLIP for Few-Shot Learning ☆40 · Updated 3 years ago
- Image/Instance Retrieval using CLIP, a self-supervised learning model ☆28 · Updated last year
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆223 · Updated last month
- [NeurIPS2022] This is the official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆83 · Updated last year
- [ICCV2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆237 · Updated last month
- [CVPR 2023 (Highlight)] FAME-ViL: Multi-Tasking V+L Model for Heterogeneous Fashion Tasks ☆53 · Updated last year
- Fine-tuning code for CLIP models ☆212 · Updated 2 weeks ago
- Data release for the ImageInWords (IIW) paper. ☆209 · Updated 4 months ago
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 8 months ago
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models". ☆102 · Updated 9 months ago
- Few-shot recognition using CLIP's OpenAI architecture. ☆36 · Updated 3 years ago
- ☆68 · Updated 9 months ago
- GroundedSAM Base Model plugin for Autodistill ☆49 · Updated 11 months ago
- This is the official repository for the paper "OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data". … ☆63 · Updated 10 months ago
- EdgeSAM model for use with Autodistill. ☆26 · Updated 9 months ago
- CVPR 2023 paper ☆50 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆67 · Updated 10 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆217 · Updated 5 months ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆97 · Updated this week
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆77 · Updated 10 months ago
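
Many of the repositories above share the same core recipe: fine-tuning CLIP on a custom image/text dataset with its contrastive loss. Below is a minimal sketch of that recipe using Hugging Face transformers; the checkpoint name is a real OpenAI release, but the image files and captions are hypothetical placeholders, and none of this is taken from the Indofashionclip code itself.

```python
# Minimal CLIP fine-tuning sketch (assumptions: file names and captions
# below are hypothetical, not from the Indofashionclip repository).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(images, captions):
    # The processor tokenizes the captions and resizes/normalizes the images.
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    # return_loss=True makes CLIPModel compute the symmetric image-text
    # contrastive loss that CLIP was originally trained with.
    outputs = model(**inputs, return_loss=True)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

# One step on a tiny hand-made batch (file names are hypothetical).
images = [Image.open(p) for p in ["saree.jpg", "kurta.jpg"]]
captions = ["a photo of a red saree", "a photo of a white kurta"]
print(train_step(images, captions))
```

In practice you would iterate this over a DataLoader covering the full dataset and add a learning-rate schedule, but the contrastive loss call is the heart of every CLIP fine-tuning repo listed here.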