zinengtang / TVLT
PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)
☆124 · Updated 2 years ago
Alternatives and similar repositories for TVLT
Users interested in TVLT are comparing it to the libraries listed below.
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound"☆146Updated 3 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts☆188Updated 7 months ago
- ☆110Updated 2 years ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers.☆53Updated 3 years ago
- [ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning"☆136Updated 2 years ago
- Summary about Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe…☆131Updated 2 years ago
- ☆31Updated 4 years ago
- ☆76Updated 3 years ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context.☆101Updated last year
- ☆57Updated last week
- MUSIC-AVQA, CVPR2022 (ORAL)☆90Updated 2 years ago
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions☆169Updated 2 years ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)☆202Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering☆190Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023)☆107Updated 10 months ago
- Official code for our CVPR 2023 paper: Test of Time: Instilling Video-Language Models with a Sense of Time☆46Updated last year
- A PyTorch implementation of EmpiricalMVM☆41Updated last year
- A Unified Framework for Video-Language Understanding☆61Updated 2 years ago
- multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo. contains the traini…☆40Updated 2 years ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners☆115Updated 3 years ago
- ☆120Updated 2 years ago
- Use CLIP to represent video for Retrieval Task☆70Updated 4 years ago
- This repository contains the code for our CVPR 2022 paper on "Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and …☆41Updated 3 years ago
- [ICLR2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by …☆77Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa…☆78Updated 3 years ago
- ☆22Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)☆208Updated 2 years ago
- Sapsucker Woods 60 Audiovisual Dataset☆17Updated 3 years ago
- This repository contains the code for our ECCV 2022 paper "Temporal and cross-modal attention for audio-visual zero-shot learning"☆25Updated 3 months ago
- [TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset☆305Updated 11 months ago