TIGER-AI-Lab / VISTA
The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025]
☆19 · Updated 5 months ago
Alternatives and similar repositories for VISTA
Users interested in VISTA are comparing it to the repositories listed below.
- ☆30 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆48 · Updated 4 months ago
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆38 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆54 · Updated 7 months ago
- ☆43 · Updated 8 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆48 · Updated 3 weeks ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆62 · Updated 3 weeks ago
- ☆45 · Updated 7 months ago
- Multimodal RewardBench ☆42 · Updated 5 months ago
- ☆66 · Updated last month
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆35 · Updated 2 weeks ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆28 · Updated 3 weeks ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆38 · Updated 5 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 · Updated last month
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆78 · Updated last week
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆77 · Updated 2 months ago
- ☆87 · Updated last month
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆40 · Updated 2 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆76 · Updated 5 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆15 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆119 · Updated 2 months ago
- Code for "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding" [ICML 2025] ☆37 · Updated 2 weeks ago
- Quick Long Video Understanding ☆60 · Updated last month
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆33 · Updated 8 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆36 · Updated 4 months ago
- ☆52 · Updated last month
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 5 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆44 · Updated last week