math-eval / MathEval
MathEval is a benchmark dedicated to the holistic evaluation of the mathematical capabilities of LLMs.
☆83 · Updated 10 months ago
Alternatives and similar repositories for MathEval
Users interested in MathEval are comparing it to the repositories listed below.
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆93 · Updated 7 months ago
- ☆147 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆175 · Updated 3 months ago
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 10 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆273 · Updated 2 years ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 4 months ago
- ☆83 · Updated last year
- Code implementation of synthetic continued pretraining ☆129 · Updated 8 months ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models