henrykmichalewski / math-evals
Math evaluations of Llama models.
☆10 · Updated last year
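This page doesn't show the repo's interface, so below is a minimal sketch of the kind of exact-match evaluation loop that math evals like this typically run. Everything in it is an assumption: the `generate` callable, the JSONL record layout, and the answer-extraction heuristic are hypothetical, not taken from math-evals.

```python
import json
import re

def extract_final_number(text: str):
    """Heuristic: treat the last number in a completion as the final answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def evaluate(generate, path: str) -> float:
    """Score a JSONL file of {"question", "answer"} records by exact match.

    `generate` is a hypothetical stand-in for whatever Llama inference call
    the repo wraps; it maps a question string to a completion string.
    """
    correct = total = 0
    with open(path) as f:
        for line in f:
            example = json.loads(line)
            prediction = extract_final_number(generate(example["question"]))
            gold = extract_final_number(example["answer"])
            correct += prediction is not None and prediction == gold
            total += 1
    return correct / total
```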
Alternatives and similar repositories for math-evals
Users interested in math-evals are comparing it to the repositories listed below.
- Official repo for the ICLR 2024 paper "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" by Xingyao Wang et al. ☆131 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆56 · Updated last year
- Generate the WizardCoder instruction dataset from CodeAlpaca. ☆21 · Updated 2 years ago
- Open Source WizardCoder Dataset. ☆160 · Updated 2 years ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024). ☆174 · Updated last year
- Repository for Decomposed Prompting. ☆93 · Updated last year
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization. ☆38 · Updated 7 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning". ☆117 · Updated last year
- Awesome LLM Self-Consistency: a curated list of self-consistency methods for large language models (see the voting sketch after this list). ☆110 · Updated 3 months ago
- Collection of papers for scalable automated alignment. ☆94 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning". ☆49 · Updated 2 years ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆83 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023). ☆159 · Updated 2 months ago
- [ACL 2023] Learning Multi-step Reasoning by Solving Arithmetic Tasks. https://arxiv.org/abs/2306.01707 ☆24 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following. ☆132 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (oral, ACL 2024 Student Research Workshop). ☆63 · Updated last year
- Data and code for Program of Thoughts [TMLR 2023] (see the sketch after this list). ☆292 · Updated last year
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆150 · Updated last month
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23). ☆90 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge". ☆78 · Updated 2 years ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning". ☆174 · Updated 5 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'". ☆122 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation. ☆154 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆256 · Updated last year
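Two of the techniques listed above are simple enough to sketch. Self-consistency (the awesome-list entry) samples several reasoning chains and majority-votes their final answers. A minimal sketch, assuming a `sample` callable that returns one stochastic chain-of-thought completion per call (hypothetical, not from any repo listed here):

```python
from collections import Counter

def self_consistency(sample, question: str, n: int = 20) -> str:
    """Sample n reasoning chains and return the most common final answer."""
    answers = []
    for _ in range(n):
        chain = sample(question)          # one stochastic completion
        lines = chain.strip().splitlines()
        answers.append(lines[-1] if lines else "")  # crude: last line = answer
    return Counter(answers).most_common(1)[0][0]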
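Program of Thoughts (the TMLR 2023 entry) instead has the model emit executable Python and runs it to obtain the answer. Another hedged sketch under the same assumptions; the `ans` variable convention and the prompt wording are illustrative:

```python
def program_of_thoughts(generate, question: str):
    """Ask the model for Python that assigns its result to `ans`, then run it."""
    prompt = (
        f"{question}\n"
        "Write Python code that computes the answer and stores it in a variable `ans`.\n"
    )
    code = generate(prompt)
    namespace: dict = {}
    exec(code, namespace)  # real harnesses sandbox this with timeouts and import limits
    return namespace.get("ans")
```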