IntelLabs / GraVi-T
Graph learning framework for long-term video understanding
☆67 · Updated 3 months ago
Alternatives and similar repositories for GraVi-T
Users interested in GraVi-T are comparing it to the libraries listed below.
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆33 · Updated 2 years ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated last year
- ☆20 · Updated 5 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆53 · Updated 3 months ago
- ☆24 · Updated 2 years ago
- Video-LLaVA fine-tune for CinePile evaluation ☆51 · Updated last year
- NeurIPS 2023 official implementation code ☆66 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆67 · Updated last year
- [CVPR'23 Highlight] AutoAD: Movie Description in Context ☆100 · Updated 11 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Code release for the paper "Progress-Aware Video Frame Captioning" (CVPR 2025) ☆18 · Updated 3 months ago
- Implementation of the proposed LVMAE from the paper "Extending Video Masked Autoencoders to 128 frames", in PyTorch ☆54 · Updated 10 months ago
- ☆44 · Updated 4 months ago
- ☆56 · Updated 3 years ago
- ☆57 · Updated last year
- SIEVE: Multimodal Dataset Pruning using Image-Captioning Models (CVPR 2024) ☆17 · Updated last year
- Official PyTorch implementation of "No Time to Waste: Squeeze Time into Channel for Mobile Video Understanding" ☆32 · Updated last year
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆52 · Updated 2 years ago
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆56 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆102 · Updated 2 years ago
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated last year
- ☆30 · Updated 2 years ago
- Code for the Video Similarity Challenge ☆80 · Updated last year
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆22 · Updated last week
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496 ☆91 · Updated 6 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 2 years ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- [CVPR 2023] Code for "Streaming Video Model" ☆78 · Updated 2 years ago