math-eval / MathEval
MathEval is a benchmark dedicated to the holistic evaluation of the mathematical capabilities of LLMs.
☆74 · Updated 4 months ago
Alternatives and similar repositories for MathEval:
Users interested in MathEval are comparing it to the repositories listed below.
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) · ☆74 · updated last month
- The implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback" · ☆38 · updated 8 months ago
- LogiQA 2.0 dataset: logical reasoning in MRC and NLI tasks · ☆90 · updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) · ☆48 · updated 11 months ago
- Measuring Massive Multitask Chinese Understanding · ☆87 · updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models" · ☆47 · updated 9 months ago
- Repository for the paper "CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models" · ☆23 · updated last year
- A collection of papers on scalable automated alignment · ☆86 · updated 5 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ☆45 · updated 2 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation · ☆137 · updated 9 months ago
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" · ☆124 · updated 9 months ago
- Do Large Language Models Know What They Don't Know? · ☆92 · updated 4 months ago
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs · ☆42 · updated 9 months ago
- The official repository of the Omni-MATH benchmark · ☆77 · updated 3 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs · ☆60 · updated 4 months ago
- Unofficial implementation of AlpaGasus · ☆90 · updated last year
- [EMNLP 2023 Demo] CLEVA: Chinese Language Models EVAluation Platform · ☆62 · updated last month
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning · ☆147 · updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" · ☆107 · updated 6 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" · ☆47 · updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues · ☆77 · updated 8 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? · ☆78 · updated last year
- [EMNLP 2024] Knowledge Verification to Nip Hallucination in the Bud · ☆22 · updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios · ☆65 · updated 3 months ago