itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆89 · Updated last week
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users who are interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the repositories listed below
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆20 · Updated last month
- ☆47 · Updated 6 months ago
- ☆101 · Updated this week
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆86 · Updated 6 months ago
- [ICLR 2025] Code and data for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆64 · Updated 6 months ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆60 · Updated 7 months ago
- Papers about hallucination in Multimodal Large Language Models (MLLMs) ☆91 · Updated 7 months ago
- A versatile toolkit for applying the Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (see the Logit Lens sketch after this list) ☆89 · Updated 4 months ago
- Chain of Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆54 · Updated 2 months ago
- ☆83 · Updated 3 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆138 · Updated last month
- Awesome SAE papers ☆35 · Updated last month
- A paper list on LLMs and Multimodal LLMs ☆42 · Updated 2 weeks ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated 3 months ago
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 6 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆70 · Updated 4 months ago
- ☆139 · Updated last month
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆285 · Updated 8 months ago
- A regularly updated paper list on LLM reasoning in latent space ☆120 · Updated this week
- ☆57 · Updated 7 months ago
- A continuously updated collection of the latest papers, technical reports, and benchmarks on multimodal reasoning ☆45 · Updated 3 months ago
- Reinforcement learning code for the SPA-VL dataset ☆34 · Updated last year
- A curated list of resources for activation engineering (a minimal steering sketch follows this list) ☆90 · Updated 3 weeks ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆51 · Updated 7 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆41 · Updated 11 months ago
- ☆222 · Updated this week
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆75 · Updated 3 weeks ago
- ☆74 · Updated last year
- Visualizing the attention of vision-language models ☆188 · Updated 3 months ago
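
For readers new to the Logit Lens technique referenced above, here is a minimal sketch of the core idea: project each layer's hidden state through the model's final norm and unembedding matrix to read off intermediate token predictions. The model name, prompt, and attribute paths (`model.model.norm`, `model.lm_head`) are illustrative assumptions for a Llama-style Hugging Face model, not the linked toolkit's API.

```python
# Minimal Logit Lens sketch: decode what the model "predicts" at every layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"  # assumption: any Llama-style causal LM

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

inputs = tok("The Eiffel Tower is located in", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq, d_model]
for layer, h in enumerate(out.hidden_states):
    h_last = model.model.norm(h[:, -1, :])  # final RMSNorm (Llama-style path)
    logits = model.lm_head(h_last)          # unembed: d_model -> vocab
    print(f"layer {layer:2d} -> {tok.decode(logits.argmax(-1))!r}")
```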
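
Similarly, the activation-engineering and latent-space-steering entries above share one core move: add a direction to the residual stream at inference time. The sketch below derives a steering vector from two contrastive prompts and injects it with a forward hook; the model name, layer index, scale, and prompts are all illustrative assumptions, not any one paper's recipe.

```python
# Minimal activation-steering sketch (hypothetical LAYER and SCALE choices).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-7B"  # assumption: any decoder-only HF model
LAYER, SCALE = 14, 4.0

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

def mean_hidden(text: str) -> torch.Tensor:
    """Mean hidden state of `text` at LAYER (hidden_states[0] is the
    embedding output, so layer LAYER's output sits at index LAYER + 1)."""
    ids = tok(text, return_tensors="pt")
    hs = model(**ids, output_hidden_states=True).hidden_states[LAYER + 1]
    return hs.mean(dim=1).squeeze(0)  # [d_model]

with torch.no_grad():
    # Steering vector: difference of mean activations on contrastive prompts.
    steer = mean_hidden("I answer only with facts I can verify.") - \
            mean_hidden("I confidently make things up.")

def add_steer(_module, _inputs, output):
    # Decoder layers may return a tensor or a tuple whose first element
    # holds the hidden states; add the steering direction either way.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + SCALE * steer.to(hs.dtype)
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(add_steer)
ids = tok("Q: Who wrote 'The Art of War'?\nA:", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # always detach the hook to restore unsteered behavior
```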