itsqyh / Awesome-LMMs-Mechanistic-Interpretability
A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates surveys, blog posts, and research papers that explore how LMMs represent, transform, and align multimodal information internally.
☆100 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability
Users who are interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the repositories listed below
- ☆47 · Updated 7 months ago
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆21 · Updated 2 months ago
- ☆102 · Updated last week
- ☆36 · Updated last week
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆68 · Updated 6 months ago
- ☆236 · Updated last week
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆84 · Updated last week
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆127 · Updated 3 weeks ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆63 · Updated 7 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆263 · Updated last week
- 😎 A curated list of awesome papers, methods & resources on LMM hallucinations ☆149 · Updated last year
- A versatile toolkit for applying the Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (see the minimal sketch after this list) ☆92 · Updated 4 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆293 · Updated 9 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆92 · Updated 7 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated 4 months ago
- ☆51 · Updated last month
- A paper list on LLMs and Multimodal LLMs ☆41 · Updated 2 weeks ago
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆143 · Updated 2 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆227 · Updated this week
- Paper List of Inference/Test-Time Scaling/Computing ☆275 · Updated 2 weeks ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆74 · Updated 4 months ago
- Chain of Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆54 · Updated 3 months ago
- ☆147 · Updated 2 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 7 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆52 · Updated 2 weeks ago
- The reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆89 · Updated 7 months ago
- A curated list of resources for activation engineering ☆91 · Updated last month
- ☆88 · Updated 3 months ago
- ☆33 · Updated 9 months ago
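
For context on the Logit Lens entry above: the technique decodes intermediate residual-stream states through the model's final norm and unembedding matrix, revealing which token each layer "currently predicts". The snippet below is a minimal sketch of that general idea, not the linked toolkit's API; the model name (`gpt2`), the prompt, and the module paths (`transformer.ln_f`, `lm_head`) follow GPT-2 conventions and are illustrative assumptions only (the listed toolkit itself targets Llama-3.1-8B and Qwen-2.5-7B, whose module names differ).

```python
# Minimal logit-lens sketch: project each layer's hidden state through the
# model's final norm and unembedding matrix to inspect per-layer predictions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in; not the toolkit's supported models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq, d_model].
# Note: the final entry is already post-norm for GPT-2; re-applying ln_f there
# is harmless for this sketch.
final_norm = model.transformer.ln_f  # GPT-2 naming; Llama/Qwen use model.model.norm
unembed = model.lm_head              # maps d_model -> vocab size

for layer_idx, h in enumerate(out.hidden_states):
    logits = unembed(final_norm(h[:, -1]))     # lens at the last token position
    top_token = tok.decode(logits.argmax(dim=-1))
    print(f"layer {layer_idx:2d}: {top_token!r}")
```

In mechanistic-interpretability work on LMMs, the same projection is often applied to visual-token positions to see when image information becomes decodable as language-model vocabulary; the sketch above only covers the text-only case.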