Relaxed-System-Lab / multi-actor-data-selection
This is the repo for the paper "Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining".
Alternatives and similar repositories for multi-actor-data-selection
Users who are interested in multi-actor-data-selection are comparing it to the repositories listed below.
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs"
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning"
- Large Language Models Can Self-Improve in Long-context Reasoning
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning"
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning"
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025)
- Official implementation of MIA-DPO
- A Self-Training Framework for Vision-Language Reasoning
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo…
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
- [IEEE VIS 2024] LLaVA-Chart: Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruc…
- MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension
- Official repository of the video reasoning benchmark MMR-V: Can Your MLLMs "Think with Video"?
- Official PyTorch implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced …
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs)