V-STaR-Bench / V-STaR
Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning
☆27 · Updated 2 months ago
Alternatives and similar repositories for V-STaR
Users interested in V-STaR are comparing it to the libraries listed below.
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆55 · Updated 3 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 7 months ago
- ☆37 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated last month
- ☆57 · Updated last year
- Official Implementation (PyTorch) of the "VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Capti…" ☆22 · Updated 8 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆55 · Updated 4 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆137 · Updated 9 months ago
- Official Implementation for "SiLVR: A Simple Language-based Video Reasoning Framework" ☆18 · Updated last month
- ☆90 · Updated 3 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 11 months ago
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆24 · Updated 6 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆73 · Updated 3 months ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 6 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆32 · Updated last year
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆126 · Updated last month
- ☆36 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆39 · Updated last week
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆59 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆82 · Updated 11 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆57 · Updated last month
- Official PyTorch code of GroundVQA (CVPR'24) ☆62 · Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 7 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆45 · Updated 6 months ago
- ☆23 · Updated 4 months ago
- ☆74 · Updated 3 months ago
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago