dengc2023 / LongDocURL
☆35 · Updated 4 months ago
Alternatives and similar repositories for LongDocURL
Users interested in LongDocURL are comparing it to the repositories listed below.
- ☆85 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM’s Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆129 · Updated 2 weeks ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆83 · Updated last year
- Official Repository of MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations ☆112 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆88 · Updated 10 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- ☆100 · Updated last year
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆101 · Updated 3 months ago
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆57 · Updated last month
- ☆46 · Updated 8 months ago
- [ACL 2025] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆85 · Updated 10 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆71 · Updated 9 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆154 · Updated 5 months ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆94 · Updated 8 months ago
- [NeurIPS 2025] The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆169 · Updated 6 months ago
- [ACL 2025] A Neural-Symbolic Self-Training Framework ☆117 · Updated 6 months ago
- Research works from Tencent AI Lab regarding self-evolving agents ☆73 · Updated 3 months ago
- ☆127 · Updated last month
- Official Repository of LatentSeek ☆70 · Updated 6 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆60 · Updated 6 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆89 · Updated last year
- An RLHF Infrastructure for Vision-Language Models ☆187 · Updated last year
- [ACL 2024 Oral] This is the code repo for our ACL 2024 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Mo… ☆39 · Updated last year
- Description for MV-MATH ☆15 · Updated 5 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆131 · Updated last month
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆58 · Updated last year
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) ☆77 · Updated 6 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆126 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 6 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year