math-eval / MathEval
MathEval is a benchmark dedicated to the holistic evaluation of the mathematical capabilities of LLMs.
★84 · Updated 8 months ago
Alternatives and similar repositories for MathEval
Users who are interested in MathEval are comparing it to the repositories listed below.
- An unofficial implementation of Self-Alignment with Instruction Backtranslation. ★140 · Updated 3 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ★91 · Updated 5 months ago
- ★144 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ★166 · Updated last month
- ★83 · Updated last year
- Collection of papers for scalable automated alignment. ★93 · Updated 9 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ★268 · Updated 11 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ★267 · Updated last year
- Reformatted Alignment ★113 · Updated 10 months ago
- Counting-Stars (★) ★83 · Updated 2 months ago
- ★49 · Updated last year
- Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks ★99 · Updated 2 years ago
- ★50 · Updated last year
- Code implementation of synthetic continued pretraining ★123 · Updated 7 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ★82 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ★68 · Updated 2 months ago
- ★103 · Updated 8 months ago
- ★96 · Updated last year
- Unofficial implementation of AlpaGasus ★92 · Updated last year
- Do Large Language Models Know What They Don't Know? ★99 · Updated 9 months ago
- Generative Judge for Evaluating Alignment ★244 · Updated last year
- The implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee…" ★38 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ★184 · Updated last year
- ★145 · Updated last year
- Data and code for the paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models" ★101 · Updated 2 years ago
- ★301 · Updated last year
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ★63 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ★80 · Updated 6 months ago
- ★83 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ★132 · Updated last year