RAIVNLab / mnms
m&ms: A Benchmark to Evaluate Tool-Use for multi-step multi-modal tasks
☆44 · Updated last year
Alternatives and similar repositories for mnms
Users interested in mnms are comparing it to the repositories listed below.
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆52 · Updated last year
- ☆42 · Updated last year
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆130 · Updated 8 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆61 · Updated last year
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆152 · Updated 5 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆27 · Updated last year
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆58 · Updated 5 months ago
- ☆31 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆83 · Updated this week
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆134 · Updated 7 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆73 · Updated last year
- ☆51 · Updated 9 months ago
- ☆210 · Updated 6 months ago
- Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?" ☆49 · Updated 6 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆66 · Updated 4 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated 11 months ago
- An easy-to-use hallucination detection framework for LLMs. ☆61 · Updated last year
- This repository is maintained to release the dataset and models for multimodal puzzle reasoning. ☆112 · Updated 9 months ago
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆55 · Updated 6 months ago
- ☆65 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆143 · Updated last year
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆135 · Updated 2 months ago
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆94 · Updated 7 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year
- [ACL 2025] A Neural-Symbolic Self-Training Framework ☆117 · Updated 6 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆63 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆71 · Updated 6 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating the critique ability of LLMs ☆48 · Updated last year
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆147 · Updated last year