ArrowLuo / CLIP4Clip
An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End-to-End Video Clip Retrieval"
☆904 · Updated 9 months ago
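For context on what these alternatives are being compared against: the core of CLIP4Clip's parameter-free ("meanP") retrieval variant is simply mean-pooling per-frame CLIP image features into one video embedding and scoring it against the CLIP text embedding by cosine similarity. Below is a minimal sketch of that idea using the openai/clip package; the function names (`video_embedding`, `text_embedding`, `similarity`) are illustrative, not from the repository, and the real code adds trainable temporal similarity heads on top of this.

```python
# Sketch of CLIP4Clip-style mean-pooled video-text similarity.
# Assumes: pip install git+https://github.com/openai/CLIP.git
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def video_embedding(frames):
    # frames: list of PIL images sampled uniformly from one video (hypothetical input)
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)              # (num_frames, dim) per-frame features
    feats = feats / feats.norm(dim=-1, keepdim=True)   # L2-normalize each frame
    return feats.mean(dim=0)                           # mean-pool frames into one video vector

def text_embedding(caption):
    tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        feat = model.encode_text(tokens)[0]
    return feat / feat.norm()

def similarity(frames, caption):
    # Retrieval score = cosine similarity between video and text embeddings
    v = video_embedding(frames)
    return torch.dot(v / v.norm(), text_embedding(caption)).item()
```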
Alternatives and similar repositories for CLIP4Clip:
Users interested in CLIP4Clip are comparing it to the repositories listed below.
- A curated list of deep learning resources for video-text retrieval. ☆602 · Updated last year
- Multi-modality pre-training ☆479 · Updated 8 months ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆343 · Updated 5 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆462 · Updated 2 years ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆312 · Updated 7 months ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆713 · Updated last year
- ☆236 · Updated 2 years ago
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆355 · Updated 2 years ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,553 · Updated this week
- The official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆532 · Updated last year
- Code for ALBEF: a new vision-language pre-training method ☆1,598 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆642 · Updated 2 years ago
- Research code for the CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆237 · Updated 2 years ago
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆273 · Updated 3 weeks ago
- [NeurIPS 2023] Code and model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆259 · Updated 10 months ago
- [ICLR 2024] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆771 · Updated 9 months ago
- METER: A Multimodal End-to-end TransformER Framework ☆364 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,427 · Updated 9 months ago
- [NeurIPS 2021] Moment-DETR code and the QVHighlights dataset ☆282 · Updated 9 months ago
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆561 · Updated 3 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆555 · Updated last year
- ☆573 · Updated last year
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆280 · Updated last year
- ☆494 · Updated 2 years ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch. ☆676 · Updated 2 years ago
- A continuously updated list of recent papers on video moment localization, temporal language grounding, and video clip retrieval. ☆242 · Updated last year
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" (https://arxiv.org/abs/2107.06383) ☆409 · Updated 2 years ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆145 · Updated 9 months ago
- An awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,163 · Updated 6 months ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,421 · Updated last year