Neleac / SpaceTimeGPT
Video description generation vision-language model
☆20 · Updated 9 months ago
Alternatives and similar repositories for SpaceTimeGPT
Users interested in SpaceTimeGPT are comparing it to the libraries listed below.
- ☆188 · Updated last year
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated last year
- Supercharged BLIP-2 that can handle videos ☆122 · Updated last year
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆198 · Updated last year
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆155 · Updated 3 months ago
- (WACV 2025, Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆82 · Updated 3 months ago
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆32 · Updated last year
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated 11 months ago
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU) ☆49 · Updated 9 months ago
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆101 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆70 · Updated last year
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆52 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- ☆57 · Updated last year
- ☆65 · Updated 2 years ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 10 months ago
- ☆99 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆259 · Updated 3 months ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆46 · Updated last year
- SMILE: A Multimodal Dataset for Understanding Laughter ☆12 · Updated 2 years ago
- ☆79 · Updated last year
- ☆138 · Updated last year
- Make Your Training Flexible: Towards Deployment-Efficient Video Models ☆31 · Updated 4 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆67 · Updated 6 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆54 · Updated 4 months ago
- ☆69 · Updated last year
- [ICCV'25] HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics ☆36 · Updated last month
- Benchmarking Panoptic Video Scene Graph Generation (PVSG), CVPR'23 ☆99 · Updated last year