ArrowLuo / CLIP4Clip
An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End-to-End Video Clip Retrieval"
☆1,006 · Updated last year
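For context on what this repository implements: CLIP4Clip transfers a pretrained CLIP image-text model to video-text retrieval, and its simplest similarity head ("mean pooling") scores a video against a caption by averaging the CLIP embeddings of sampled frames. Below is a minimal sketch of that idea using the openai/CLIP package; the frame-sampling step and the `frame_paths` variable are illustrative assumptions, not code from the repository.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical input: a few frames sampled uniformly from one video.
frame_paths = ["frame_000.jpg", "frame_012.jpg", "frame_024.jpg"]
frames = torch.stack([preprocess(Image.open(p)) for p in frame_paths]).to(device)
text = clip.tokenize(["a man is playing guitar"]).to(device)

with torch.no_grad():
    # Encode each frame independently, then L2-normalize.
    frame_feats = model.encode_image(frames)
    frame_feats = frame_feats / frame_feats.norm(dim=-1, keepdim=True)
    # Parameter-free mean pooling: the video embedding is the average frame embedding.
    video_feat = frame_feats.mean(dim=0)
    video_feat = video_feat / video_feat.norm()
    # Encode and normalize the query caption.
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Cosine similarity between caption and video; videos are ranked by this score.
similarity = (text_feat @ video_feat).item()
print(f"caption-video similarity: {similarity:.4f}")
```

Mean pooling is only the parameter-free baseline; the paper also studies sequential (LSTM/Transformer) and tight cross-attention similarity heads on top of the same frame features.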
Alternatives and similar repositories for CLIP4Clip
Users interested in CLIP4Clip are comparing it to the repositories listed below.
- A curated list of deep learning resources for video-text retrieval. ☆638 · Updated 2 years ago
- Multi-modality pre-training ☆505 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆487 · Updated 3 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆724 · Updated 2 years ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆362 · Updated last year
- ☆256 · Updated 2 years ago
- [NeurIPS 2021] Moment-DETR code and QVHighlights dataset ☆334 · Updated last year
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆341 · Updated last year
- The official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆592 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆577 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,507 · Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆377 · Updated 3 years ago
- Code for ALBEF: a new vision-language pre-training method ☆1,734 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆669 · Updated 3 years ago
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆305 · Updated 11 months ago
- ☆646 · Updated 2 years ago
- Research code for the CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆246 · Updated 3 years ago
- VideoX: a collection of video cross-modal models ☆1,047 · Updated last year
- [NeurIPS 2023] Code and model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆294 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆228 · Updated 2 years ago
- Contrastive Language-Image Forensic Search enables free-text search through videos using OpenAI's CLIP model ☆480 · Updated 3 years ago
- UMT is a unified and flexible framework that can handle different input modality combinations and output video moment retrieval and/or … ☆232 · Updated last year
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆178 · Updated last year
- Continuously updated collection of cutting-edge papers on video moment localization, temporal language grounding, and video clip retrieval. ☆257 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆374 · Updated 3 years ago
- ☆559 · Updated 3 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆418 · Updated 3 years ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,114 · Updated 3 months ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆296 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,185 · Updated last year