RAIVNLab / mnms
m&ms: A Benchmark to Evaluate Tool-Use for multi-step multi-modal tasks
☆44 · Updated last year
Alternatives and similar repositories for mnms
Users interested in mnms are comparing it to the repositories listed below.
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆51 · Updated last year
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆63 · Updated last year
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆133 · Updated 10 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆158 · Updated 7 months ago
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆148 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- ☆42 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆63 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 8 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆139 · Updated 9 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 9 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆62 · Updated 7 months ago
- This repository is maintained to release dataset and models for multimodal puzzle reasoning. ☆113 · Updated 11 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆49 · Updated last year
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control ☆66 · Updated 3 weeks ago
- ☆54 · Updated 11 months ago
- [ACL 2024] The project of Symbol-LLM ☆59 · Updated last year
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆56 · Updated 8 months ago
- [ACL 2025] A Neural-Symbolic Self-Training Framework ☆117 · Updated 8 months ago
- ☆101 · Updated 2 years ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆73 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆106 · Updated last year
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆68 · Updated 6 months ago
- ☆103 · Updated 2 years ago