[NIPS2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset
☆298 · Mar 14, 2024 · Updated 2 years ago
Alternatives and similar repositories for VAST
Users interested in VAST are comparing it to the repositories listed below.
- [TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆307 · Dec 25, 2024 · Updated last year
- Official PyTorch implementation of the paper "Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner" ☆15 · Aug 9, 2023 · Updated 2 years ago
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Dec 25, 2024 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆228 · Jul 21, 2023 · Updated 2 years ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆2,219 · Dec 15, 2025 · Updated 3 months ago
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆346 · May 27, 2024 · Updated last year
- ☆80 · Nov 24, 2024 · Updated last year
- This repository contains metadata of the WavCaps dataset and code for downstream tasks. ☆257 · Jul 25, 2024 · Updated last year
- 【CVPR'2023 Highlight & TPAMI】Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆258 · Nov 29, 2024 · Updated last year
- Multi-modality pre-training ☆510 · May 8, 2024 · Updated last year
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆511 · Nov 18, 2025 · Updated 4 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆874 · Mar 25, 2024 · Updated last year
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder". ☆287 · Mar 20, 2024 · Updated 2 years ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,134 · Jun 4, 2024 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,025 · Apr 12, 2024 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆413 · May 8, 2025 · Updated 10 months ago
- Official implementation of CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer". ☆55 · Oct 21, 2025 · Updated 5 months ago
- Source code for the paper 'Audio Captioning Transformer' ☆56 · Jan 18, 2022 · Updated 4 years ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Dec 28, 2023 · Updated 2 years ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆689 · Jan 29, 2025 · Updated last year
- ☆34 · Mar 10, 2023 · Updated 3 years ago
- Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing, ECCV, 2020. (Spotlight) ☆90 · Jul 25, 2024 · Updated last year
- ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning. In ICCV, 2021. ☆63 · Nov 18, 2021 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations ☆144 · Apr 9, 2024 · Updated last year
- This repository contains the dataset, codebase, and benchmarks for our paper: <CNVid-3.5M: Build, Filter, and Pre-train the Large-scale P… ☆25 · Nov 28, 2023 · Updated 2 years ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,284 · Jan 23, 2025 · Updated last year
- Official Codebase of "A Closer Look at Weakly-Supervised Audio-Visual Source Localization" (NeurIPS 2022) ☆20 · Dec 6, 2022 · Updated 3 years ago
- Official Implementation of EnCLAP (ICASSP 2024) ☆94 · Jun 2, 2024 · Updated last year
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆58 · Sep 4, 2024 · Updated last year
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆205 · Nov 13, 2023 · Updated 2 years ago
- [ECCV2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Jul 4, 2024 · Updated last year
- A 6-million Audio-Caption Paired Dataset Built with an LLM- and ALM-based Automatic Pipeline ☆197 · Dec 13, 2024 · Updated last year
- The official code of Towards Balanced Alignment: Modal-Enhanced Semantic Modeling for Video Moment Retrieval (AAAI2024) ☆32 · Mar 29, 2024 · Updated last year
- Learning audio concepts from natural language supervision ☆651 · Sep 18, 2024 · Updated last year
- A lightweight flexible Video-MLLM developed by TencentQQ Multimedia Research Team. ☆74 · Oct 14, 2024 · Updated last year
- Vision Transformers are Parameter-Efficient Audio-Visual Learners ☆107 · Aug 11, 2023 · Updated 2 years ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,499 · Aug 5, 2025 · Updated 7 months ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆158 · Dec 9, 2024 · Updated last year
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments". ☆296 · Jun 13, 2024 · Updated last year