OpenGVLab / TimeSuite
[ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning
☆33 · Updated last month
Alternatives and similar repositories for TimeSuite
Users interested in TimeSuite are comparing it to the repositories listed below.
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 8 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆40 · Updated 3 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆54 · Updated 10 months ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated last month
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆58 · Updated 3 months ago
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆28 · Updated 2 weeks ago
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Updated last year
- [AAAI 2024] DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval ☆41 · Updated 7 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆61 · Updated 11 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆60 · Updated 3 weeks ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆30 · Updated 7 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆41 · Updated last year
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆32 · Updated last year
- The official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition" ☆79 · Updated 4 months ago
- Evolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆35 · Updated last month
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆71 · Updated 10 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆42 · Updated 2 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆94 · Updated 3 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆79 · Updated last month
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆106 · Updated last month
- Official PyTorch code of ReKV (ICLR'25) ☆17 · Updated 2 months ago
- Official repo for CAT-V — Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆37 · Updated 2 weeks ago
- Composed Video Retrieval ☆57 · Updated last year
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆94 · Updated 5 months ago
- Code implementation of the paper "MUSE: Mamba is Efficient Multi-scale Learner for Text-video Retrieval" (AAAI 2025) ☆19 · Updated 3 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆64 · Updated last month
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆20 · Updated last month