microsoft / XPretrain
Multi-modality pre-training
☆502 · Updated last year
Alternatives and similar repositories for XPretrain
Users interested in XPretrain are comparing it to the libraries listed below:
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆336 · Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV 2021] ☆369 · Updated 3 years ago
- [NeurIPS 2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆284 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) ☆331 · Updated last year
- [TPAMI 2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆296 · Updated 7 months ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆282 · Updated 2 years ago
- [NeurIPS 2021] Moment-DETR code and QVHighlights dataset ☆323 · Updated last year
- Large-scale text-video dataset: 10 million captioned short videos. ☆654 · Updated last year
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Perspective…* ☆126 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" (see the retrieval sketch after this list) ☆976 · Updated last year
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆170 · Updated last year
- Official repository of ChatCaptioner ☆465 · Updated 2 years ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆389 · Updated 3 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆187 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 3 months ago
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆286 · Updated last year
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers ☆615 · Updated 9 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated 2 weeks ago
- Code for the CLIPScore metric (EMNLP 2021) (see the scoring sketch after this list) ☆237 · Updated 2 years ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆590 · Updated 10 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆194 · Updated last year
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆167 · Updated last year
- UMT is a unified and flexible framework that can handle different input modality combinations and output video moment retrieval and/or highlight detection ☆224 · Updated last year
- ☆250 · Updated 2 years ago
- [ICCV 2023] UniVTG: Towards Unified Video-Language Temporal Grounding ☆360 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆642 · Updated 6 months ago
- Easily create large video datasets from video URLs ☆626 · Updated last year
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆286 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
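
Several of the retrieval repositories above (Frozen in Time, CLIP4Clip, X-CLIP) share one core idea: embed the text query and sampled video frames into CLIP's joint space, then rank videos by cosine similarity. The sketch below shows the parameter-free variant studied in the CLIP4Clip paper, where per-frame CLIP features are mean-pooled into a single video embedding. The Hugging Face checkpoint name and the helper functions are illustrative assumptions, not the exact code of any repository listed here.

```python
# Minimal CLIP4Clip-style video-text retrieval sketch (assumed setup):
# encode frames with CLIP's image tower, mean-pool into one video vector,
# and score videos against a text query by cosine similarity.
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL = "openai/clip-vit-base-patch32"  # assumed checkpoint, for illustration
model = CLIPModel.from_pretrained(MODEL).eval()
processor = CLIPProcessor.from_pretrained(MODEL)

@torch.no_grad()
def video_embedding(frames):
    """frames: list of PIL.Image sampled uniformly from one video."""
    inputs = processor(images=frames, return_tensors="pt")
    feats = model.get_image_features(**inputs)        # (num_frames, dim)
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize frames
    pooled = feats.mean(dim=0)                        # mean-pool over frames
    return pooled / pooled.norm()                     # re-normalize the pool

@torch.no_grad()
def text_embedding(query):
    """query: natural-language description of the target clip."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    feats = model.get_text_features(**inputs)
    return (feats / feats.norm(dim=-1, keepdim=True)).squeeze(0)

def rank_videos(query, video_frame_lists):
    """Return video indices sorted by cosine similarity to the query."""
    q = text_embedding(query)
    scores = [torch.dot(q, video_embedding(f)).item() for f in video_frame_lists]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

Mean pooling is the simplest temporal aggregation; the listed repositories also explore learned alternatives (e.g. transformer-based frame aggregation), which this sketch deliberately omits.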
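The CLIPScore entry above refers to the reference-free captioning metric of Hessel et al. (EMNLP 2021), defined as w · max(cos(image, caption), 0) with w = 2.5, computed on CLIP embeddings. A minimal sketch, assuming a transformers CLIP checkpoint rather than whatever model the official repository pins:

```python
# CLIPScore sketch: w * max(cosine(image_emb, text_emb), 0), with w = 2.5.
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL = "openai/clip-vit-base-patch32"  # assumed checkpoint, for illustration
model = CLIPModel.from_pretrained(MODEL).eval()
processor = CLIPProcessor.from_pretrained(MODEL)

@torch.no_grad()
def clip_score(image, caption, w=2.5):
    """image: PIL.Image; caption: str. Returns the scalar CLIPScore."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    cos = (img * txt).sum(dim=-1)                # cosine similarity
    return (w * torch.clamp(cos, min=0)).item()  # clip negatives to zero
```

The max(·, 0) clamp and the w = 2.5 rescaling come directly from the paper; they keep scores non-negative and spread them over a more readable range.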