Multi-modality pre-training — ☆510 · Updated May 8, 2024
Alternatives and similar repositories for XPretrain
Users interested in XPretrain are comparing it to the libraries listed below.
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers — ☆674 · Updated Oct 25, 2024
- Large-scale text-video dataset: 10 million captioned short videos. — ☆677 · Updated Aug 14, 2024
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training — ☆281 · Updated Mar 25, 2023
- ☆110 · Updated Dec 23, 2022
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" — ☆1,025 · Updated Apr 12, 2024
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding — ☆2,201 · Updated Dec 15, 2025
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts — ☆188 · Updated May 1, 2025
- Easily create large video datasets from video URLs — ☆650 · Updated Jul 30, 2024
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" — ☆136 · Updated May 5, 2023
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models — ☆347 · Updated May 27, 2024
- Code release for "Learning Video Representations from Large Language Models" — ☆536 · Updated Oct 1, 2023
- Video embeddings for retrieval with natural language queries — ☆342 · Updated Feb 15, 2023
- [ICCV 2021] Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval — ☆380 · Updated May 19, 2022
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… — ☆723 · Updated Aug 8, 2023
- [TPAMI 2024] Code and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset — ☆306 · Updated Dec 25, 2024
- VideoX: a collection of video cross-modal models — ☆1,061 · Updated Jun 3, 2024
- LAVIS - A One-stop Library for Language-Vision Intelligence — ☆11,167 · Updated Nov 18, 2024
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding — ☆3,124 · Updated Jun 4, 2024
- EVA Series: Visual Representation Fantasies from BAAI — ☆2,647 · Updated Aug 1, 2024
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… — ☆1,492 · Updated Aug 5, 2025
- [ECCV 2022] LocVTP: Video-Text Pre-training for Temporal Localization — ☆39 · Updated Jul 29, 2022
- COYO-700M: Large-scale Image-Text Pair Dataset — ☆1,251 · Updated Nov 30, 2022
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders Are Data-Efficient Learners for Self-Supervised Video Pre-Training — ☆1,681 · Updated Dec 8, 2023
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding — ☆686 · Updated Jan 29, 2025
- Official code for "Bridging Video-text Retrieval with Multiple Choice Questions", CVPR 2022 (Oral) — ☆141 · Updated Jul 20, 2022
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training — ☆141 · Updated Dec 16, 2025
- A curated list of deep learning resources for video-text retrieval. — ☆643 · Updated Oct 20, 2023
- [CVPR 2023 Highlight & TPAMI] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning — ☆126 · Updated Dec 28, 2024
- Grounded Language-Image Pre-training — ☆2,572 · Updated Jan 24, 2024
- Video datasets — ☆1,612 · Updated Mar 8, 2023
- [NeurIPS 2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset — ☆298 · Updated Mar 14, 2024
- [CVPR 2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? — ☆256 · Updated Nov 29, 2024
- [ECCV 2022] A PyTorch implementation of TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval — ☆79 · Updated Nov 29, 2022
- A Unified Framework for Video-Language Understanding — ☆61 · Updated Jun 17, 2023
- Official PyTorch implementation of the paper "Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring" — ☆107 · Updated Jan 28, 2024
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) — ☆32 · Updated May 15, 2023
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation — ☆5,681 · Updated Aug 5, 2024
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models — ☆158 · Updated Dec 9, 2024
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… — ☆2,554 · Updated Apr 24, 2024