compling-wat / vlm-lens
[EMNLP 2025 Demo] Extracting internal representations from vision-language models. Beta version.
☆102 · Updated 2 months ago
Alternatives and similar repositories for vlm-lens
Users interested in vlm-lens are comparing it to the libraries listed below.
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆233 · Updated 5 months ago
- ☆114 · Updated 6 months ago
- 🔥 [ICLR 2025] Official PyTorch model for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆26 · Updated 11 months ago
- We introduce BabyVision, a benchmark revealing the infancy of AI vision. ☆162 · Updated 2 weeks ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆68 · Updated 8 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆117 · Updated 2 months ago
- A collection of awesome "think with videos" papers. ☆83 · Updated last month
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆158 · Updated 4 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆41 · Updated 8 months ago
- [NeurIPS 2024] Official repository of Multi-Object Hallucination in Vision-Language Models ☆33 · Updated last year
- ☆62 · Updated 2 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆174 · Updated 4 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆88 · Updated 4 months ago
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆206 · Updated 3 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆179 · Updated 7 months ago
- ☆80 · Updated 7 months ago
- Data and code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆78 · Updated 11 months ago
- 🔥 [NeurIPS 2025] Official implementation of "Generate, but Verify: Reducing Visual Hallucination in Vision-Language Models with Retrospe… ☆51 · Updated last week
- The official repository for the paper "ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning" ☆140 · Updated 3 weeks ago
- ☆68 · Updated 4 months ago
- TStar is a unified temporal search framework for long-form video question answering. ☆86 · Updated 4 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆53 · Updated 4 months ago
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence ☆74 · Updated last week
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆84 · Updated 3 months ago
- [NeurIPS'24] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs ☆58 · Updated last year
- PyTorch implementation of NEPA ☆296 · Updated last month
- Official code of "Monet: Reasoning in Latent Visual Space Beyond Image and Language" ☆119 · Updated last month
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" ☆232 · Updated last week
- Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos ☆63 · Updated 4 months ago