itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆148 Updated last week
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users that are interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the libraries listed below
- ☆51 Updated 11 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆187 Updated last week
- A paper list on LLMs and Multimodal LLMs ☆49 Updated 3 weeks ago
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆25 Updated last month
- Paper List of Inference/Test-Time Scaling/Computing ☆317 Updated last month
- ☆109 Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆85 Updated 11 months ago
- ☆54 Updated 3 months ago
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆199 Updated 3 weeks ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆80 Updated 10 months ago
- ☆275 Updated 3 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆308 Updated last week
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆328 Updated last year
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆177 Updated last week
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆349 Updated last week
- ☆104 Updated 7 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆86 Updated 8 months ago
- Reinforcement learning code for the SPA-VL dataset ☆39 Updated last year
- Visualizing the attention of vision-language models ☆242 Updated 7 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆97 Updated 11 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆272 Updated last week
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆118 Updated last month
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆63 Updated 4 months ago
- ☆174 Updated 5 months ago
- A curated list of resources for activation engineering ☆107 Updated 3 weeks ago
- ☆35 Updated last year
- ☆39 Updated 4 months ago
- ☆34 Updated last month
- ☆30 Updated 6 months ago
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆100 Updated 10 months ago