IntelLabs / GraVi-T
Graph learning framework for long-term video understanding

Related projects:
- Official implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, Max Bain, Ars…
- Data-Efficient Multimodal Fusion on a Single GPU
- [CVPR'23 Highlight] AutoAD: Movie Description in Context
- Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection (ECCV 2022)
- Official code for the CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time"
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra…
- SMILE: A Multimodal Dataset for Understanding Laughter
- Sapsucker Woods 60 Audiovisual Dataset
- Learning to cut end-to-end pretrained modules
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa…
- Source code for "Glance and Focus: Memory Prompting for Multi-Event Video Question Answering" (NeurIPS 2023)
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch
- Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities"
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight)
- ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning (ICCV 2021)
- FG 2024 Papers: a comprehensive collection of research papers presented at one of the premier conferences on automatic face and g…
- Code repo for LoCoNet: Long-Short Context Network for Active Speaker Detection
- [CVPR 2024] Code and models for pi-ViT, a video transformer for understanding activities of daily living
- LAVIS - A One-stop Library for Language-Vision Intelligence
- Codebase for the paper "TIM: A Time Interval Machine for Audio-Visual Action Recognition"