Baiqi-Li / NaturalBench
[NeurIPS'24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark that challenges vision-language models with simple questions about natural imagery.
★89 · Updated 7 months ago
Alternatives and similar repositories for NaturalBench
Users interested in NaturalBench are comparing it to the repositories listed below.
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ★114 · Updated 10 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ★99 · Updated last year
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models ★129 · Updated this week
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ★43 · Updated last year
- [ICCV 2025] Boosting MLLM Reasoning with Text-Debiased Hint-GRPO ★43 · Updated 7 months ago
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description ★75 · Updated 2 years ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ★185 · Updated last year
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ★134 · Updated 9 months ago
- The first Interleaved framework for textual reasoning within the visual generation process ★156 · Updated 2 months ago
- ★70 · Updated 10 months ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ★189 · Updated last year
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ★81 · Updated 2 weeks ago
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ★90 · Updated 10 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ★152 · Updated last year
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ★68 · Updated 8 months ago
- Your efficient and accurate answer verification system for RL training. ★43 · Updated 7 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ★118 · Updated last year
- [ICML 2025] Official repository for paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ★188 · Updated 4 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ★187 · Updated last year
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response ★42 · Updated last year
- ✨✨ Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ★305 · Updated 8 months ago
- ★31 · Updated last year
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ★176 · Updated 9 months ago
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022] ★136 · Updated last year
- [NeurIPS 2025] Hybrid Latent Reasoning via Reinforcement Learning ★175 · Updated 4 months ago
- [ACL 2023 Findings] FACTUAL dataset, the textual scene graph parser trained on FACTUAL. ★123 · Updated 2 months ago
- CoS: Chain-of-Shot Prompting for Long Video Understanding ★53 · Updated 11 months ago
- WorldGPT: Empowering LLM as Multimodal World Model ★125 · Updated last year
- Multimodal deep-research MLLM and benchmark. The first long-horizon multimodal deep-research MLLM, extending the number of reasoning turn… ★184 · Updated this week
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ★130 · Updated last year