zengxingchen / LLM-Visualization-Paper-List
Awesome-Paper-list: Visualization meets LLM
☆46 · Updated last month
Alternatives and similar repositories for LLM-Visualization-Paper-List
Users interested in LLM-Visualization-Paper-List are comparing it to the repositories listed below.
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆155 · Updated last week
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆123 · Updated 3 months ago
- A benchmark designed to evaluate visualization generation methods. ☆46 · Updated 2 months ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆88 · Updated 5 months ago
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆42 · Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆84 · Updated 7 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models ☆68 · Updated 6 months ago
- A paper list on LLMs and Multimodal LLMs ☆46 · Updated 3 weeks ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆313 · Updated 11 months ago
- 🔥🔥🔥 Latest papers and code on uncertainty-based RL ☆49 · Updated 3 weeks ago
- Visualizing the attention of vision-language models ☆231 · Updated 6 months ago
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆55 · Updated 3 months ago
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆53 · Updated 3 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆131 · Updated last month
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆168 · Updated last month
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆76 · Updated 9 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- [ICLR 2025] Geometric Problem Solving Through Unified Formalized Vision-Language Pre-training ☆46 · Updated 7 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge ☆73 · Updated 3 months ago
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality ☆39 · Updated 2 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆63 · Updated last month