itsqyh / Awesome-LMMs-Mechanistic-Interpretability
☆33 · Updated last month
Alternatives and similar repositories for Awesome-LMMs-Mechanistic-Interpretability:
Users interested in Awesome-LMMs-Mechanistic-Interpretability are comparing it to the repositories listed below.
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆12 · Updated 2 weeks ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated last month
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (see the logit-lens sketch after this list) ☆73 · Updated 2 months ago
- Awesome SAE papers ☆26 · Updated last month
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆36 · Updated 9 months ago
- A regularly updated paper list for LLMs-reasoning-in-latent-space ☆69 · Updated last week
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆30 · Updated 5 months ago
- A Survey on the Honesty of Large Language Models ☆57 · Updated 4 months ago
- A curated list of resources for activation engineering ☆59 · Updated last week
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆87 · Updated 9 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆42 · Updated 3 months ago
- Reinforcement learning code for the SPA-VL dataset ☆32 · Updated 9 months ago
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! ☆48 · Updated 2 weeks ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆28 · Updated 5 months ago
- Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆13 · Updated last month
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated 2 weeks ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆14 · Updated last week
- A collection of reverse-engineering resources for large models ☆32 · Updated 3 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆71 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆71 · Updated 5 months ago
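Several of the repositories above revolve around the logit-lens technique: decoding each intermediate hidden state through the model's final layer norm and unembedding matrix to see which token the model would predict at that depth. The snippet below is a minimal, illustrative sketch (not taken from any of the listed toolkits) that assumes a Llama-style Hugging Face model exposing `model.model.norm` and `model.lm_head`; module names differ across architectures, so adapt accordingly.

```python
# Minimal logit-lens sketch: project every layer's hidden state through the
# final RMSNorm and the unembedding matrix, then read off the top token.
# Assumes a Llama-style checkpoint (e.g. Llama-3.1-8B); this is an
# illustration, not the API of any specific repository listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"  # any causal LM with a similar layout
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden]
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.model.norm(h[:, -1]))  # decode last position only
    top_id = logits.argmax(dim=-1).item()
    print(f"layer {layer:2d} -> {tok.decode(top_id)!r}")
```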