itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆173 · Updated 2 months ago
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the libraries listed below.
- ☆55 · Updated last year
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research work, papers & resources ☆236 · Updated 2 months ago
- ☆111 · Updated 3 months ago
- Paper list on LLMs and multimodal LLMs ☆50 · Updated last week
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆228 · Updated 2 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆92 · Updated last year
- A regularly updated paper list for LLMs-reasoning-in-latent-space ☆242 · Updated this week
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆321 · Updated 2 months ago
- ☆59 · Updated 5 months ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆98 · Updated last year
- ☆294 · Updated 5 months ago
- ☆197 · Updated this week
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆285 · Updated last month
- Paper list on inference/test-time scaling and computing ☆335 · Updated 4 months ago
- Latest advances on modality priors in multimodal large language models ☆29 · Updated 2 weeks ago
- A paper list of Awesome Latent Space ☆251 · Updated this week
- A curated list of resources for activation engineering ☆119 · Updated 2 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆71 · Updated 8 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆364 · Updated last year
- Papers about hallucination in Multimodal Large Language Models (MLLMs) ☆98 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆127 · Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆86 · Updated 10 months ago
- Code for "The Devil behind the Mask: An Emergent Safety Vulnerability of Diffusion LLMs" ☆73 · Updated 2 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆68 · Updated last month
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 6 months ago
- Visualizing the attention of vision-language models ☆268 · Updated 10 months ago
- ☆66 · Updated 5 months ago
- [NAACL 2025 Main] Official implementation of MLLMU-Bench ☆43 · Updated 9 months ago
- ☆76 · Updated last year
- 🔥 An open-source survey of the latest video reasoning tasks, paradigms, and benchmarks ☆108 · Updated last week