microsoft / XPretrain
Multi-modality pre-training
☆495 · Updated last year
Alternatives and similar repositories for XPretrain
Users interested in XPretrain are comparing it to the repositories listed below.
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆330 · Updated last year
- Large-scale text-video dataset. 10 million captioned short videos. ☆642 · Updated 10 months ago
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆360 · Updated 3 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆958 · Updated last year
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV2024) ☆327 · Updated last year
- [CVPR2023] All in One: Exploring Unified Video-Language Pre-training ☆283 · Updated 2 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated last month
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆227 · Updated last year
- [NeurIPS 2021] Moment-DETR code and QVHighlights dataset ☆308 · Updated last year
- [TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆292 · Updated 5 months ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆657 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated 2 years ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆162 · Updated last year
- Official Repository of ChatCaptioner ☆464 · Updated 2 years ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆604 · Updated 7 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆630 · Updated 4 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆583 · Updated 8 months ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆412 · Updated 2 years ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆456 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆392 · Updated 2 years ago
- [CVPR 2023] Official repository of paper titled "Fine-tuned CLIP models are efficient video learners". ☆278 · Updated last year
- Summary about Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆124 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆378 · Updated last month
- CLIPScore EMNLP code ☆226 · Updated 2 years ago
- ☆246 · Updated 2 years ago
- [NIPS2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆279 · Updated last year
- A curated list of deep learning resources for video-text retrieval. ☆623 · Updated last year
- Official code for "Bridging Video-text Retrieval with Multiple Choice Questions", CVPR 2022 (Oral). ☆139 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆399 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year