itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆166 · Updated last month
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the repositories listed below.
- ☆54 · Updated last year
- A paper list on LLMs and multimodal LLMs ☆50 · Updated this week
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers and resources ☆227 · Updated 2 months ago
- ☆110 · Updated 2 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆213 · Updated last month
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆28 · Updated 2 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆88 · Updated 11 months ago
- ☆57 · Updated 4 months ago
- A curated "awesome" paper list on latent space. ☆123 · Updated this week
- ☆290 · Updated 5 months ago
- A regularly updated paper list on LLM reasoning in latent space. ☆215 · Updated this week
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆318 · Updated last month
- A paper list on inference/test-time scaling and computation ☆326 · Updated 3 months ago
- ☆185 · Updated 6 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆349 · Updated last year
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆66 · Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆92 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆98 · Updated last year
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆282 · Updated last month
- ☆108 · Updated 8 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆376 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆122 · Updated 2 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆86 · Updated 5 months ago
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 6 months ago
- A curated list of resources for activation engineering ☆114 · Updated 2 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆132 · Updated 3 months ago
- Chain of Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆70 · Updated 8 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆86 · Updated 9 months ago
- Survey on Data-centric Large Language Models ☆88 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆33 · Updated 9 months ago