IntelLabs / GraVi-T
Graph learning framework for long-term video understanding
☆63 · Updated 2 weeks ago
Alternatives and similar repositories for GraVi-T
Users interested in GraVi-T are comparing it to the libraries listed below.
- Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection (ECCV 2022) ☆65 · Updated last year
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆32 · Updated 2 years ago
- ☆43 · Updated last month
- Learning to cut end-to-end pretrained modules ☆32 · Updated 2 months ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated last year
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- [AAAI 2023 (Oral)] CrissCross: Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity ☆25 · Updated last year
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 · Updated 2 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆49 · Updated 5 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context ☆100 · Updated 7 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" (ICCV 2023) ☆99 · Updated 11 months ago
- Codebase for the paper "TIM: A Time Interval Machine for Audio-Visual Action Recognition" ☆41 · Updated 7 months ago
- Code repo for "LoCoNet: Long-Short Context Network for Active Speaker Detection" ☆36 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- ☆55 · Updated 2 years ago
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆53 · Updated last year
- ☆22 · Updated last year
- [ACCV 2024] Official implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, M… ☆25 · Updated 4 months ago
- ☆19 · Updated last month
- Official code for the CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆45 · Updated last year
- ☆29 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- The official codebase of the FineAction dataset; data and code updates are forthcoming ☆18 · Updated 2 months ago
- ☆72 · Updated last year
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities" ☆26 · Updated 5 months ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆125 · Updated 2 years ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆21 · Updated 9 months ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆101 · Updated 5 months ago
- [TMM 2023] VideoXum: Cross-modal Visual and Textural Summarization of Videos ☆45 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated last year