cardinalblue / clip-models-for-distillation
☆18 · Updated 2 years ago
Alternatives and similar repositories for clip-models-for-distillation
Users interested in clip-models-for-distillation are comparing it to the libraries listed below.
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT ☆72 · Updated 3 years ago
- ☆28 · Updated 4 years ago
- Uses CLIP to represent video for the retrieval task ☆70 · Updated 4 years ago
- ☆26 · Updated 4 years ago
- Code and data for the ECCV 2020 paper Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards ☆85 · Updated 2 years ago
- ☆48 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 3 years ago
- Official code for the WACV 2021 paper Compositional Learning of Image-Text Query for Image Retrieval ☆56 · Updated 4 years ago
- A unified framework to jointly model images, text, and human attention traces ☆79 · Updated 4 years ago
- ☆72 · Updated 2 years ago
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch ☆34 · Updated 4 years ago
- ☆24 · Updated 4 years ago
- The official implementation of InterBERT ☆11 · Updated 3 years ago
- [FGVC9-CVPR 2022] Second-place solution for the 2nd eBay eProduct Visual Search Challenge ☆26 · Updated 3 years ago
- MDMMT: Multidomain Multimodal Transformer for Video Retrieval ☆26 · Updated 4 years ago
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language" ☆18 · Updated 4 years ago
- [ICME 2022] Code for the paper SimViT: Exploring a Simple Vision Transformer with Sliding Windows ☆68 · Updated 3 years ago
- Uses pretrained encoders and language models to generate captions from multimedia inputs ☆97 · Updated 2 years ago
- ☆20 · Updated 4 years ago
- [ECCV 2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆59 · Updated 4 years ago
- Phrase Localization Evaluation Toolkit ☆20 · Updated 6 years ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022) ☆36 · Updated 3 years ago
- ☆131 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- This project provides a dataset with bounding boxes, body poses, 3D face meshes, and captions of people from our LAION-2.2B. Additionally i… ☆14 · Updated 3 years ago
- Implementation of our PR 2020 paper: Unsupervised Text-to-Image Synthesis ☆13 · Updated 5 years ago
- ☆28 · Updated 5 years ago