ChnQ / MI-Peaks
☆36 · Updated last week
Alternatives and similar repositories for MI-Peaks
Users interested in MI-Peaks are comparing it to the repositories listed below.
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" · ☆68 · Updated 6 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free · ☆30 · Updated 3 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… · ☆100 · Updated 3 weeks ago
- ☆33 · Updated 9 months ago
- ☆236 · Updated last week
- Official Repository of "Learning what reinforcement learning can't" · ☆42 · Updated this week
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… · ☆74 · Updated 3 weeks ago
- ☆147 · Updated last month
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning · ☆61 · Updated 6 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… · ☆92 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆76 · Updated 5 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. · ☆127 · Updated 2 weeks ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" · ☆35 · Updated 5 months ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models · ☆77 · Updated 9 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models · ☆58 · Updated 7 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… · ☆76 · Updated last month
- The reinforcement learning code for the SPA-VL dataset · ☆36 · Updated last year
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond · ☆263 · Updated last week
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation · ☆59 · Updated last year
- A curated list of resources for activation engineering · ☆91 · Updated last month
- [ACL 2024 Findings] Official code and data for "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" · ☆19 · Updated 8 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability · ☆28 · Updated 4 months ago
- A Sober Look at Language Model Reasoning · ☆75 · Updated 3 weeks ago
- 📜 Paper list on decoding methods for LLMs and LVLMs · ☆52 · Updated 2 weeks ago
- ☆65 · Updated 3 months ago
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… · ☆19 · Updated 4 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications · ☆80 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging · ☆59 · Updated 4 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping · ☆49 · Updated last month
- ☆47 · Updated 7 months ago