Alpha-Innovator / ChartVLM
Official Repository of ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning
☆246 · Updated last year
Alternatives and similar repositories for ChartVLM
Users interested in ChartVLM are comparing it to the repositories listed below.
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆175 · Updated 7 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆152 · Updated last year
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆260 · Updated last year
- Recipes to train the self-rewarding reasoning LLMs. ☆229 · Updated 9 months ago
- DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Models ☆147 · Updated 10 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆180 · Updated last year
- ☆401 · Updated 11 months ago
- An open-source implementation for training LLaVA-NeXT. ☆428 · Updated last year
- [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning. ☆131 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆99 · Updated last year
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆88 · Updated 5 months ago
- ☆244 · Updated 11 months ago
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆134 · Updated 7 months ago
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆171 · Updated 11 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆271 · Updated 6 months ago
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. ☆169 · Updated 5 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆114 · Updated 8 months ago
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆198 · Updated last week
- ✨✨ R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆270 · Updated 6 months ago
- Grimoire is All You Need for Enhancing Large Language Models ☆116 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated last year
- Official implementation of "SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience" ☆210 · Updated 3 months ago
- A scalable, end-to-end training pipeline for general-purpose agents ☆361 · Updated 5 months ago
- [NeurIPS 2025 Poster] Search and Refine During Think: Facilitating Knowledge Refinement for Improved Retrieval-Augmented Reasoning ☆111 · Updated 2 weeks ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆579 · Updated last year
- ☆251 · Updated 6 months ago
- ☆352 · Updated last year
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆329 · Updated 5 months ago
- Official code of the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆82 · Updated 6 months ago
- [AAAI 2026] GUI-G²: Gaussian Reward Modeling for GUI Grounding ☆240 · Updated 3 weeks ago