zengxingchen / LLM-Visualization-Paper-List
Awesome-Paper-list: Visualization meets LLM
☆45 · Updated last month
Alternatives and similar repositories for LLM-Visualization-Paper-List
Users interested in LLM-Visualization-Paper-List are comparing it to the repositories listed below.
- [IEEE VIS 2024] LLaVA-Chart: Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning ☆68 · Updated 5 months ago
- A paper list on LLMs and multimodal LLMs ☆42 · Updated 2 weeks ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆64 · Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆70 · Updated 4 months ago
- ☆13 · Updated last month
- Survey on Data-centric Large Language Models ☆83 · Updated 11 months ago
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆111 · Updated last week
- [ACL 2025 Main] ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation ☆51 · Updated this week
- ☆64 · Updated 3 weeks ago
- Visualizing the attention of vision-language models ☆188 · Updated 3 months ago
- A benchmark designed to evaluate visualization generation methods. ☆41 · Updated 2 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆30 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆65 · Updated 11 months ago
- ☆101 · Updated this week
- ☆122 · Updated 4 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆75 · Updated 7 months ago
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆45 · Updated 3 months ago
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆40 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆87 · Updated 6 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆179 · Updated 3 weeks ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆119 · Updated 9 months ago
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆63 · Updated last week
- ☆74 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinations… ☆22 · Updated 4 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model ☆79 · Updated 5 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository aggregates… ☆89 · Updated last week
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆84 · Updated 2 months ago
- [ICML 2025] Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models' ☆136 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago