itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆112 · Updated last week
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users who are interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the repositories listed below.
- ☆49 · Updated 8 months ago
- ☆103 · Updated 3 weeks ago
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆22 · Updated 3 weeks ago
- ☆252 · Updated last month
- ☆47 · Updated 3 weeks ago
- ☆155 · Updated 2 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆277 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆105 · Updated 3 weeks ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆94 · Updated 8 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆77 · Updated 5 months ago
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers and resources ☆152 · Updated last week
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆71 · Updated 7 months ago
- ☆95 · Updated 4 months ago
- Paper List of Inference/Test Time Scaling/Computing ☆286 · Updated last month
- A paper list on LLMs and multimodal LLMs ☆42 · Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆66 · Updated 8 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆247 · Updated 2 weeks ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆293 · Updated 3 weeks ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆96 · Updated 7 months ago
- A curated list of resources for activation engineering ☆99 · Updated 2 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆95 · Updated 5 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆302 · Updated 9 months ago
- ☆52 · Updated last month
- Reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year
- Chain of Thought (CoT) is so hot! And so long! We need shorter reasoning processes! ☆68 · Updated 4 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆92 · Updated 8 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- A regularly updated paper list for LLM reasoning in latent space ☆142 · Updated 2 weeks ago
- ☆61 · Updated 9 months ago
- Survey on Data-centric Large Language Models ☆84 · Updated last year