Xiuyuan-Chen / AutoEval-Video
☆35 · Updated last year
Alternatives and similar repositories for AutoEval-Video:
Users interested in AutoEval-Video are comparing it to the libraries listed below.
- ☆51 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 9 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 8 months ago
- ☆73 · Updated 3 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆46 · Updated this week
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆44 · Updated 4 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated last month
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆30 · Updated last year
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- Preference Learning for LLaVA ☆43 · Updated 5 months ago
- ☆63 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆28 · Updated 3 weeks ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆111 · Updated 3 weeks ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 9 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆82 · Updated last year
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆30 · Updated 3 months ago
- ☆54 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆76 · Updated 2 months ago
- ☆40 · Updated 3 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆23 · Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆61 · Updated this week
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆74 · Updated 2 weeks ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆102 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 7 months ago
- LMM solved catastrophic forgetting, AAAI 2025 ☆40 · Updated last week
- Language Repository for Long Video Understanding ☆31 · Updated 10 months ago
- ☆25 · Updated 9 months ago
- ☆91 · Updated last year
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 7 months ago