JieyuZ2 / TaskMeAnything
[NeurIPS 2024] A task generation and model evaluation system for multimodal language models.
☆73 · Updated 9 months ago
Alternatives and similar repositories for TaskMeAnything
Users interested in TaskMeAnything are comparing it to the repositories listed below.
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆108 · Updated last month
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆59 · Updated 10 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆76 · Updated 2 months ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 6 months ago
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆41 · Updated last month
- ☆63 · Updated this week
- Official implementation of "PyVision: Agentic Vision with Dynamic Tooling."☆125Updated last month
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models☆48Updated 4 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning"☆143Updated 3 months ago
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025)☆123Updated 3 months ago
- Code for "Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models" ☆255 · Updated last month
- [ICCV 2025] Auto-interpretation pipeline and other tooling for multimodal SAE analysis ☆150 · Updated 2 months ago
- [EMNLP 2025 Main] AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time ☆81 · Updated 3 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated last year
- ☆55 · Updated last week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆137 · Updated last year
- ☆93 · Updated 2 months ago
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆37 · Updated 7 months ago
- Multimodal RewardBench ☆46 · Updated 6 months ago
- ☆53 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 · Updated 3 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202…☆36Updated 3 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆53 · Updated 9 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆152 · Updated last month
- ☆74 · Updated 2 months ago
- ☆16 · Updated 10 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆169 · Updated 5 months ago
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models" ☆89 · Updated last year