Baiqi-Li / NaturalBench
[NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark that challenges vision-language models with simple questions about natural imagery.
⭐80 · Updated last week
Alternatives and similar repositories for NaturalBench:
Users interested in NaturalBench are comparing it to the libraries listed below.
- [NAACL 2025 Oral] From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models ⭐93 · Updated 2 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ⭐107 · Updated 3 weeks ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ⭐92 · Updated last year
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ⭐42 · Updated 8 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ⭐137 · Updated 4 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ⭐170 · Updated 5 months ago
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ⭐82 · Updated 3 weeks ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ⭐159 · Updated 6 months ago
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description ⭐72 · Updated last year
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ⭐131 · Updated last week
- ⭐64 · Updated last month
- FACTUAL benchmark dataset and the pre-trained textual scene graph parser trained on FACTUAL ⭐105 · Updated last week
- RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response ⭐40 · Updated 3 months ago
- WorldGPT: Empowering LLM as Multimodal World Model ⭐114 · Updated 8 months ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ⭐175 · Updated 9 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ⭐113 · Updated last year
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations [EMNLP 2022] ⭐129 · Updated 6 months ago
- A post-training method to enhance CLIP's fine-grained visual representations with generative models ⭐47 · Updated 3 weeks ago
- A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources ⭐170 · Updated last week
- [ICLR 2025] BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities ⭐140 · Updated 2 months ago
- The repository for the paper "Leopard: A Vision Language Model for Text-Rich Multi-Image Tasks" ⭐155 · Updated 3 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ⭐186 · Updated 10 months ago
- ⭐28 · Updated 5 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ⭐123 · Updated 7 months ago
- SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation ⭐52 · Updated 4 months ago
- Official implementation of X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models ⭐153 · Updated 4 months ago
- An official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? ⭐124 · Updated last week
- [AAAI 2025] Code for the paper: Enhancing Multimodal Large Language Models' Complex Reasoning via Similarity Computation ⭐2 · Updated 3 months ago
- ✨✨ Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ⭐271 · Updated 3 weeks ago
- A curated list of awesome papers on the platonic representation hypothesis ⭐20 · Updated this week