IntelLabs / GraVi-T
Graph learning framework for long-term video understanding
☆70 · Updated 5 months ago
Alternatives and similar repositories for GraVi-T
Users interested in GraVi-T are comparing it to the libraries listed below.
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- ☆20 · Updated 7 months ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆33 · Updated 2 years ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆55 · Updated 5 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context. ☆101 · Updated last year
- SIEVE: Multimodal Dataset Pruning using Image-Captioning Models (CVPR 2024) ☆18 · Updated last year
- ☆25 · Updated 2 years ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆68 · Updated last year
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 11 months ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- ☆45 · Updated 7 months ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Official implementation code for a NeurIPS 2023 paper ☆68 · Updated 2 years ago
- ☆58 · Updated 2 weeks ago
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆53 · Updated 2 years ago
- SMILE: A Multimodal Dataset for Understanding Laughter ☆13 · Updated 2 years ago
- [ACCV 2024] Official Implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, M… ☆26 · Updated 10 months ago
- ☆14 · Updated 3 years ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆46 · Updated last year
- Learning to cut end-to-end pretrained modules ☆32 · Updated 8 months ago
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) ☆20 · Updated 5 months ago
- Official implementation for the paper "Transferring Visual Knowledge with Pre-trained Models for Multimodal Machine Translation", publish… ☆20 · Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 3 years ago
- Official code for the CVPR 2024 paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆85 · Updated last year
- Implementation of the MC-ViT model from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆27 · Updated last month
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated last year
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆58 · Updated last year
- Code for “Pretrained Language Models as Visual Planners for Human Assistance” ☆61 · Updated 2 years ago
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆30 · Updated last year