cardinalblue / clip-models-for-distillation
☆18 · Updated last year
Alternatives and similar repositories for clip-models-for-distillation:
Users interested in clip-models-for-distillation are comparing it to the libraries listed below.
- [FGVC9-CVPR 2022] The second-place solution for the 2nd eBay eProduct Visual Search Challenge. ☆26 · Updated 2 years ago
- Use CLIP to represent videos for retrieval tasks ☆69 · Updated 4 years ago
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language". ☆18 · Updated 3 years ago
- ☆47 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT ☆72 · Updated 2 years ago
- MDMMT: Multidomain Multimodal Transformer for Video Retrieval ☆26 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch ☆34 · Updated 4 years ago
- Masked Vision-Language Transformer in Fashion ☆33 · Updated last year
- Large-Scale Bidirectional Training for Zero-Shot Image Captioning ☆21 · Updated 2 years ago
- [NeurIPS 2021] ORL: Unsupervised Object-Level Representation Learning from Scene Images ☆58 · Updated 3 years ago
- Chinese text encoder for CLIP ☆22 · Updated 2 years ago
- The official repository for CookGAN: Meal Image Synthesis from Ingredients ☆23 · Updated 2 years ago
- Repository for the ACL 2020 paper "Refer360°: A Referring Expression Recognition Dataset in 360° Images" ☆13 · Updated 3 years ago
- [ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources ☆44 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- [ACL 2021] mTVR: Multilingual Video Moment Retrieval ☆26 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated last year
- A Python toolkit for the OmniLabel benchmark providing code for evaluation and visualization ☆21 · Updated 2 months ago
- Using pretrained encoders and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in PyTorch ☆52 · Updated 4 years ago
- [BMVC22] Official Implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆55 · Updated 2 years ago
- ☆73 · Updated 2 years ago
- CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification - 4th Workshop on Computer Vision for Fashion, Art, and Design ☆27 · Updated 2 years ago
- ☆28 · Updated 5 years ago
- Command-line tool for downloading and extending the RedCaps dataset. ☆46 · Updated last year
- [CVPR 2022 Challenge Rank 1st] The official code for V2L: Leveraging Vision and Vision-language Models into Large-scale Product Retrieval… ☆29 · Updated 2 years ago
- Code for the Video Similarity Challenge. ☆78 · Updated last year