cardinalblue / clip-models-for-distillation
☆18 · Updated last year
Alternatives and similar repositories for clip-models-for-distillation: users interested in it are comparing it to the libraries listed below.
- [FGVC9-CVPR 2022] The second-place solution for the 2nd eBay eProduct Visual Search Challenge. ☆26 · Updated 2 years ago
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language". ☆18 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- Use CLIP to represent video for retrieval tasks. ☆69 · Updated 3 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone". ☆34 · Updated 2 years ago
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch. ☆34 · Updated 4 years ago
- Large-Scale Bidirectional Training for Zero-Shot Image Captioning. ☆21 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- ☆32 · Updated 2 years ago
- Simple script to compute CLIP-based scores given a trained DALL-E model. ☆30 · Updated 3 years ago
- ☆46 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- [ECCV 2022] Contrastive Vision-Language Pre-training with Limited Resources. ☆45 · Updated 2 years ago
- ☆47 · Updated 3 years ago
- MDMMT: Multidomain Multimodal Transformer for Video Retrieval. ☆26 · Updated 3 years ago
- The official implementation of InterBERT. ☆11 · Updated 2 years ago
- Implementation of our PR 2020 paper: Unsupervised Text-to-Image Synthesis. ☆13 · Updated 4 years ago
- A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering. ☆41 · Updated 4 years ago
- CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification (4th Workshop on Computer Vision for Fashion, Art, and Design). ☆27 · Updated 2 years ago
- A huge dataset for Document Visual Question Answering. ☆15 · Updated 6 months ago
- ☆24 · Updated 3 years ago
- Official code release for ARTEMIS: Attention-based Retrieval with Text-Explicit Matching and Implicit Similarity (published at ICLR 2022). ☆48 · Updated 2 years ago
- This project provides a dataset with bounding boxes, body poses, 3D face meshes & captions of people from our LAION-2.2B. Additionally i… ☆13 · Updated 3 years ago
- ☆50 · Updated 2 years ago
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT. ☆73 · Updated 2 years ago
- [ICME 2022] Code for the paper "SimViT: Exploring a Simple Vision Transformer with Sliding Windows". ☆67 · Updated 2 years ago
- Un-*** 50 billion multimodal dataset. ☆24 · Updated 2 years ago
- CLIP4IDC: CLIP for Image Difference Captioning (AACL 2022). ☆32 · Updated 2 years ago
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model. ☆41 · Updated last month
- Code and data for the ECCV 2020 paper "Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards". ☆84 · Updated last year