dvlab-research / Mr-Ben
This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models"
☆47 · Updated 5 months ago
Alternatives and similar repositories for Mr-Ben:
Users interested in Mr-Ben are comparing it to the repositories listed below.
- ☆59 · Updated 7 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy · ☆58 · Updated 3 months ago
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" · ☆46 · Updated 8 months ago
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning · ☆84 · Updated last month
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" · ☆31 · Updated 8 months ago
- M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning · ☆56 · Updated 3 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" · ☆58 · Updated last year
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. · ☆60 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. · ☆107 · Updated last week
- ☆43 · Updated 5 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs · ☆39 · Updated 4 months ago
- ☆62 · Updated this week
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) · ☆57 · Updated 5 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ☆46 · Updated 3 months ago
- This is the implementation of LeCo · ☆32 · Updated 2 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models · ☆81 · Updated 9 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs · ☆23 · Updated 6 months ago
- The official repository of the Omni-MATH benchmark. · ☆78 · Updated 3 months ago
- BeHonest: Benchmarking Honesty in Large Language Models · ☆31 · Updated 7 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. · ☆56 · Updated 8 months ago
- The code and data for the paper JiuZhang3.0 · ☆43 · Updated 10 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ☆97 · Updated 3 weeks ago
- The rule-based evaluation subset and code implementation of Omni-MATH · ☆18 · Updated 3 months ago
- ☆49 · Updated last month
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆130 · Updated 6 months ago
- The official code repository for PRMBench. · ☆68 · Updated last month
- [ICML 2024] Selecting High-Quality Data for Training Language Models · ☆162 · Updated 9 months ago
- Feeling confused about super alignment? Here is a reading list · ☆42 · Updated last year
- ☆17 · Updated 4 months ago
- A Survey on the Honesty of Large Language Models · ☆56 · Updated 3 months ago