math-eval / MathEval
MathEval is a benchmark dedicated to the holistic evaluation of the mathematical capabilities of LLMs.
☆83 · Updated 8 months ago
Alternatives and similar repositories for MathEval
Users interested in MathEval are comparing it to the repositories listed below:
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆87 · Updated 5 months ago
- ☆144 · Updated last year
- ☆84 · Updated last year
- Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks ☆95 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆263 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆162 · Updated 3 weeks ago
- Do Large Language Models Know What They Don't Know? ☆98 · Updated 8 months ago
- ☆142 · Updated last year
- Collection of papers for scalable automated alignment. ☆92 · Updated 8 months ago
- ☆83 · Updated last year
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated 10 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆140 · Updated 2 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 2 months ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆36 · Updated 10 months ago
- Counting-Stars (★) ☆83 · Updated last month
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- Reformatted Alignment ☆113 · Updated 9 months ago
- ☆96 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ☆279 · Updated last year
- Code implementation of synthetic continued pretraining ☆118 · Updated 6 months ago
- Generative Judge for Evaluating Alignment ☆244 · Updated last year
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆104 · Updated last month
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆145 · Updated 8 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆130 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆178 · Updated last year
- ☆50 · Updated last year