pranonrahman / ChartSummLinks
ChartSumm is a large-scale benchmark for automatic chart-to-text summarization
☆11 · Updated 2 years ago
Alternatives and similar repositories for ChartSumm
Users interested in ChartSumm are comparing it to the repositories listed below
- ☆85 · Updated last year
- ☆124 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆95 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- ☆238 · Updated 9 months ago
- The official dataset of the FlowVQA project. ☆20 · Updated last year
- ☆101 · Updated 2 years ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆85 · Updated last year
- ☆11 · Updated last year
- Dataset introduced in PlotQA: Reasoning over Scientific Plots ☆82 · Updated 2 years ago
- ☆47 · Updated 10 months ago
- Code for the ACL 2024 paper "Soft Self-Consistency Improves Language Model Agents" ☆25 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆81 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated 2 months ago
- A Hugging Face Trainer that records the losses of different tasks and objectives. ☆49 · Updated 10 months ago
- Official GitHub repo of G-LLaVA ☆148 · Updated 11 months ago
- [AAAI 2025] Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning ☆42 · Updated 9 months ago
- Official repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆120 · Updated 4 months ago
- ☆19 · Updated 2 years ago
- ☆88 · Updated last year
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated last year
- ☆68 · Updated 2 years ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆168 · Updated 2 years ago
- [ACL 2024 Oral] Code repo for the ACL'24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Mo…" ☆39 · Updated last year
- ☆13 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆131 · Updated last month
- M-HalDetect dataset release ☆27 · Updated 2 years ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆140 · Updated 9 months ago