HenryZhen97 / Reconsidering-Overthinking
☆20 · Updated 4 months ago
Alternatives and similar repositories for Reconsidering-Overthinking
Users interested in Reconsidering-Overthinking are comparing it to the repositories listed below.
- Paper list on LLMs and Multimodal LLMs ☆52 · Updated last week
- ☆55 · Updated last year
- ☆299 · Updated 6 months ago
- Up-to-date curated list of state-of-the-art research work, papers & resources on hallucinations in large vision-language models ☆241 · Updated 3 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆99 · Updated last year
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆93 · Updated last year
- Less is More: High-value Data Selection for Visual Instruction Tuning ☆17 · Updated 11 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆99 · Updated last year
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆173 · Updated 2 months ago
- A regularly updated paper list for LLMs reasoning in latent space ☆253 · Updated last week
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆29 · Updated last month
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆106 · Updated last year
- [ECCV 2024] Official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆68 · Updated last year
- ☆112 · Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆87 · Updated 10 months ago
- ☆67 · Updated 5 months ago
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality ☆60 · Updated 6 months ago
- [ICLR 2025] MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation ☆127 · Updated 4 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" — exploring safety risks, attacks, and defenses for Large Reasoning … ☆83 · Updated 4 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆234 · Updated 2 months ago
- Official GitHub repo for SafeDialBench, a comprehensive multi-turn dialogue benchmark for evaluating LLM safety ☆38 · Updated 8 months ago
- [AAAI 2026 Oral] Official implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Updated 9 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆86 · Updated 10 months ago
- An implementation for MLLM oversensitivity evaluation ☆17 · Updated last year
- [NeurIPS 2025] More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆73 · Updated 7 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆103 · Updated last week
- ☆60 · Updated 5 months ago
- ☆57 · Updated 7 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆365 · Updated last year
- [NeurIPS 2025 Spotlight] EMPO, a fully unsupervised RLVR method ☆90 · Updated last month