itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆178 · Updated 2 months ago
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the libraries listed below.
- ☆112 · Updated 4 months ago
- ☆55 · Updated last year
- A paper list on LLMs and Multimodal LLMs ☆53 · Updated this week
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆236 · Updated 3 months ago
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆245 · Updated 3 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆261 · Updated last week
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆29 · Updated last month
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆93 · Updated last year
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆100 · Updated last year
- ☆60 · Updated 6 months ago
- 🔥 An open-source survey of the latest video reasoning tasks, paradigms, and benchmarks. ☆125 · Updated 2 weeks ago
- Paper List of Inference/Test-Time Scaling/Computing ☆339 · Updated 4 months ago
- ☆299 · Updated 6 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆330 · Updated 2 weeks ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆89 · Updated 11 months ago
- A paper list of Awesome Latent Space. ☆289 · Updated last week
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆72 · Updated 9 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆292 · Updated 2 weeks ago
- ☆204 · Updated 3 weeks ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆99 · Updated last year
- Visualizing the attention of vision-language models ☆272 · Updated 10 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆129 · Updated 4 months ago
- ☆77 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆368 · Updated last year
- Imagine While Reasoning in Space: Multimodal Visualization-of-Thought (ICML 2025) ☆63 · Updated 9 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆73 · Updated 3 months ago
- Survey on Data-centric Large Language Models ☆88 · Updated last year
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 7 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆34 · Updated 10 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆46 · Updated last year