Alpha-Innovator / ChartVLM
Official Repository of ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning
☆235 · Updated 11 months ago
Alternatives and similar repositories for ChartVLM
Users interested in ChartVLM are comparing it to the repositories listed below.
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆169 · Updated 4 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆149 · Updated 8 months ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆262 · Updated last year
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆178 · Updated 9 months ago
- ☆238 · Updated 8 months ago
- An open-source implementation for training LLaVA-NeXT. ☆417 · Updated 10 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆96 · Updated last year
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆134 · Updated 4 months ago
- ☆397 · Updated 8 months ago
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'24) ☆86 · Updated 2 months ago
- Recipes to train the self-rewarding reasoning LLMs. ☆225 · Updated 5 months ago
- DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Models ☆142 · Updated 7 months ago
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning ☆169 · Updated 8 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆113 · Updated 5 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆265 · Updated 3 months ago
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆123 · Updated 11 months ago
- ☆228 · Updated 3 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆251 · Updated 3 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆320 · Updated last month
- The official code repository of MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks ☆85 · Updated 5 months ago
- The repository for the paper "Leopard: A Vision Language Model For Text-Rich Multi-Image Tasks" ☆160 · Updated 8 months ago
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. ☆158 · Updated last month
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆296 · Updated 3 months ago
- [ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆579 · Updated last year
- ☆343 · Updated last year
- Grimoire is All You Need for Enhancing Large Language Models ☆116 · Updated last year
- A collection of multimodal reasoning papers, codes, datasets, benchmarks and resources. ☆292 · Updated this week
- Efficient Reasoning Vision Language Models ☆366 · Updated this week
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆132 · Updated this week
- Official implementation of "SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience" ☆181 · Updated 3 weeks ago