MARIO-Math-Reasoning / MARIO_EVAL
☆51 · Updated 6 months ago
Alternatives and similar repositories for MARIO_EVAL
Users interested in MARIO_EVAL are comparing it to the libraries listed below.
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 3 months ago
- Collection of papers on scalable automated alignment. ☆93 · Updated 10 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆111 · Updated 3 months ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated last year
- ☆68 · Updated last year
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆129 · Updated last year
- ☆340 · Updated 3 months ago
- Explore what LLMs are really learning over SFT ☆29 · Updated last year
- Repository containing the source code for Self-Evaluation Guided MCTS for online DPO. ☆321 · Updated last year
- ☆51 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆187 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆81 · Updated 8 months ago
- ☆17 · Updated 10 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 9 months ago
- [EMNLP 2023] Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- ☆75 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆254 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆112 · Updated last year
- ☆43 · Updated 5 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆92 · Updated 6 months ago
- A new tool-learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆179 · Updated 5 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆137 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆349 · Updated last year
- ☆280 · Updated 8 months ago
- [NeurIPS 2024] Can Language Models Learn to Skip Steps? ☆20 · Updated 7 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆42 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆162 · Updated last year
- ☆309 · Updated last year
- GSM-Plus: Data, code, and evaluation for enhancing robust mathematical reasoning in math word problems. ☆63 · Updated last year