uta-smile / TCL
Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning", CVPR 2022
☆268, updated Oct 2, 2024
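TCL and several of the repositories below (ALBEF, SLIP, the CLIP variants) are built around a symmetric image-text contrastive objective. As an illustrative aside and not the authors' implementation, a minimal NumPy sketch of that shared building block, the batch-wise InfoNCE loss with matched pairs on the diagonal, looks like this (`info_nce` and its arguments are hypothetical names for this sketch):

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text InfoNCE loss (illustrative NumPy sketch)."""
    # L2-normalize embeddings so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (B, B); matched pairs on the diagonal
    idx = np.arange(logits.shape[0])

    def xent(l):
        # row-wise cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # subtract row max for numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero; shuffling the text side of the batch drives it up, which is the signal these pre-training methods optimize against.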
Alternatives and similar repositories for TCL
Users interested in TCL are comparing it to the repositories listed below.
- Code for ALBEF: a new vision-language pre-training method (☆1,752, updated Sep 20, 2022)
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) (☆490, updated Nov 25, 2022)
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts (☆188, updated May 1, 2025)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (☆673, updated Sep 19, 2022)
- Code release for SLIP: Self-supervision meets Language-Image Pre-training (☆787, updated Feb 9, 2023)
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning (☆208, updated Sep 30, 2022)
- The PyTorch implementation for "Video-Text Pre-training with Learned Regions" (☆42, updated Jul 15, 2022)
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training (☆281, updated Mar 25, 2023)
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] (☆380, updated May 19, 2022)
- METER: A Multimodal End-to-end TransformER Framework (☆375, updated Nov 16, 2022)
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) (☆209, updated Dec 18, 2022)
- Code for the ICML 2021 (long talk) paper "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" (☆1,524, updated Apr 3, 2024)
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 (☆421, updated Oct 28, 2022)
- [CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training" (☆151, updated Jun 7, 2023)
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" (☆405, updated Nov 10, 2023)
- Project page for VinVL (☆359, updated Jul 26, 2023)
- Grounded Language-Image Pre-training (☆2,573, updated Jan 24, 2024)
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) (☆2,173, updated May 20, 2024)
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) (☆89, updated Jun 12, 2023)
- ☆45, updated Aug 14, 2023
- Research code for the ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" (☆800, updated Jun 30, 2021)
- Oscar and VinVL (☆1,052, updated Aug 28, 2023)
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… (☆76, updated Feb 21, 2022)
- Data Release for the VALUE Benchmark (☆30, updated Feb 16, 2022)
- Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021 (Oral) (☆164, updated Aug 24, 2025)
- [CVPR 2022] Code for "Object-aware Video-language Pre-training for Retrieval" (☆62, updated May 25, 2022)
- Extended COCO Validation (ECCV) Caption dataset (ECCV 2022) (☆56, updated Mar 1, 2024)
- Code and instructions for baselines in the VLUE benchmark (☆41, updated Jul 16, 2022)
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting (☆542, updated Sep 15, 2023)
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… (☆2,555, updated Apr 24, 2024)
- Product1M (☆90, updated Oct 12, 2022)
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… (☆723, updated Aug 8, 2023)
- Official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" (☆1,023, updated Apr 12, 2024)
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation (☆5,669, updated Aug 5, 2024)
- Dataset and starter code for the visual entailment dataset (☆118, updated Apr 21, 2022)
- Official implementation of CoSMo: Content-Style Modulation for Image Retrieval with Text Feedback (CVPR 2021) (☆66, updated Sep 13, 2022)
- Video embeddings for retrieval with natural language queries (☆342, updated Feb 15, 2023)
- Bridging Vision and Language Model (☆286, updated Mar 27, 2023)
- Official CVPR 2021 PyTorch code for "UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training" (☆34, updated Nov 9, 2021)