tingxueronghua / ChartLlama-code
☆249 · Updated last year
Alternatives and similar repositories for ChartLlama-code
Users interested in ChartLlama-code are comparing it to the repositories listed below.
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train … ☆218 · Updated 4 months ago
- ☆233 · Updated last year
- [ACM MM 2024 Oral] Official code for "OneChart: Purify the Chart Structural Extraction via One Auxiliary Token" ☆253 · Updated 6 months ago
- This is the official repository for Retrieval Augmented Visual Question Answering ☆238 · Updated 10 months ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆387 · Updated 6 months ago
- ☆330 · Updated last year
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever ☆100 · Updated 5 months ago
- A curated list of recent and past chart understanding work based on our IEEE TKDE survey paper: From Pixels to Insights: A Survey on Auto… ☆226 · Updated 4 months ago
- A Toolkit for Table-based Question Answering ☆114 · Updated 2 years ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning ☆130 · Updated last year
- ☆141 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆356 · Updated last year
- TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios ☆234 · Updated 2 months ago
- Official Repository of MMLongBench-Doc: Benchmarking Long-context Document Understanding with Visualizations ☆101 · Updated last month
- ☆224 · Updated 6 months ago
- LongQLoRA: Extend the Context Length of LLMs Efficiently ☆166 · Updated last year
- Document Artificial Intelligence ☆189 · Updated last month
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 9 months ago
- Official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆176 · Updated last year
- Codes for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆270 · Updated 2 years ago
- ☆81 · Updated last year
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆294 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆294 · Updated last year
- Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances ☆158 · Updated 5 months ago
- On the Hidden Mystery of OCR in Large Multimodal Models (OCRBench) ☆737 · Updated 3 months ago
- [ACL 2025 Oral] MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆229 · Updated 5 months ago
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆268 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆166 · Updated last year
- [CVPR'25 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆421 · Updated 5 months ago
- ☆79 · Updated last year