itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆119 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users that are interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the libraries listed below
- ☆49 · Updated 9 months ago
- ☆104 · Updated last month
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆22 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆131 · Updated last month
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆286 · Updated last week
- A paper list on LLMs and Multimodal LLMs ☆42 · Updated this week
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research, papers & resources ☆159 · Updated last month
- Paper List of Inference/Test-Time Scaling/Computing ☆294 · Updated last month
- ☆261 · Updated last month
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆75 · Updated 8 months ago
- A curated list of resources for activation engineering ☆100 · Updated 3 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆258 · Updated this week
- ☆49 · Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆68 · Updated 9 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆307 · Updated 10 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆153 · Updated last week
- ☆96 · Updated 5 months ago
- ☆51 · Updated 2 months ago
- Survey on Data-centric Large Language Models ☆84 · Updated last year
- ☆37 · Updated 2 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆95 · Updated 8 months ago
- Visualizing the attention of vision-language models ☆221 · Updated 5 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆69 · Updated 4 months ago
- ☆161 · Updated 3 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆98 · Updated 8 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆94 · Updated 9 months ago
- Less is More: High-value Data Selection for Visual Instruction Tuning ☆15 · Updated 7 months ago
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆55 · Updated last month
- The reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year