JieyuZ2 / TaskMeAnything
[NeurIPS 2024] A task generation and model evaluation system for multimodal language models.
☆73 Updated 10 months ago
Alternatives and similar repositories for TaskMeAnything
Users interested in TaskMeAnything are comparing it to the repositories listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆113 Updated 3 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆59 Updated last year
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆47 Updated 3 weeks ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 Updated 3 months ago
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆128 Updated 3 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆159 Updated last month
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 Updated 7 months ago
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆49 Updated 5 months ago
- ☆68 Updated last month
- ☆60 Updated last week
- Geometric-Mean Policy Optimization ☆86 Updated last week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆70 Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆157 Updated 4 months ago
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025) ☆132 Updated 5 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆119 Updated 5 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆147 Updated 3 weeks ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆70 Updated last year
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆263 Updated 2 months ago
- [EMNLP 2025 Main] AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time ☆84 Updated 4 months ago
- ☆60 Updated last month
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos ☆45 Updated 4 months ago
- Multimodal RewardBench ☆54 Updated 8 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆52 Updated 10 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆85 Updated 2 months ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 Updated 4 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆48 Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 Updated 4 months ago
- Reinforcement Learning of Vision Language Models with Self Visual Perception Reward ☆136 Updated last month
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆174 Updated 7 months ago