compling-wat / vlm-lens
[EMNLP 2025 Demo] Extracting internal representations from vision-language models. Beta version.
☆80 · Updated last month
Alternatives and similar repositories for vlm-lens
Users who are interested in vlm-lens are comparing it to the repositories listed below.
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆222 · Updated 5 months ago
- ☆112 · Updated 5 months ago
- ☆60 · Updated last month
- 🔥 [ICLR 2025] Official PyTorch model for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆26 · Updated 11 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆153 · Updated 3 months ago
- Thinking with Videos from Open-Source Priors. We reproduce chain-of-frames visual reasoning by fine-tuning open-source video models. Give… ☆202 · Updated 2 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆79 · Updated last year
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆192 · Updated 2 weeks ago
- [NeurIPS 2024] Official repository of "Multi-Object Hallucination in Vision-Language Models" ☆33 · Updated last year
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆67 · Updated 8 months ago
- Official implementation of "LaViDa: A Large Diffusion Language Model for Multimodal Understanding" ☆186 · Updated 3 weeks ago
- We introduce "Thinking with Video", a new paradigm leveraging video generation for multimodal reasoning. Our VideoThinkBench shows that S… ☆234 · Updated this week
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆174 · Updated 7 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆112 · Updated 2 months ago
- Visual Planning: Let's Think Only with Images ☆290 · Updated 7 months ago
- [ICCV 2025] Auto-interpretation pipeline and many other functionalities for multimodal SAE analysis ☆172 · Updated 3 months ago
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" ☆231 · Updated 8 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆134 · Updated 4 months ago
- Official code of "Monet: Reasoning in Latent Visual Space Beyond Image and Language" ☆100 · Updated last week
- TStar is a unified temporal search framework for long-form video question answering ☆84 · Updated 4 months ago
- ☆302 · Updated 3 weeks ago
- The official repository for the paper "ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning" ☆136 · Updated 2 weeks ago
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆68 · Updated last year
- ☆96 · Updated 6 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆58 · Updated 6 months ago
- ☆68 · Updated 3 months ago
- ☆80 · Updated 6 months ago
- Data and code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆77 · Updated 10 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆203 · Updated 6 months ago