GanjinZero / math401-llm
Source code and datasets for "How well do Large Language Models perform in Arithmetic tasks?"
☆56 · Updated 2 years ago
Alternatives and similar repositories for math401-llm
Users who are interested in math401-llm are comparing it to the repositories listed below.
- [ICLR24] The open-source repo of THU-KEG's KoLA benchmark. ☆51 · Updated 2 years ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆50 · Updated 2 years ago
- LogiQA 2.0 dataset: logical reasoning in MRC and NLI tasks. ☆99 · Updated 2 years ago
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks. ☆55 · Updated 2 years ago
- ☆56 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023). ☆64 · Updated last year
- ☆49 · Updated 2 years ago
- [ICLR 2023] Codebase for the Copy-Generator model, including an implementation of kNN-LM. ☆188 · Updated 8 months ago
- Code for the ACL 2023 paper "Pre-Training to Learn in Context". ☆107 · Updated last year
- ☆17 · Updated 7 months ago
- Code for the paper "InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning". ☆100 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- Repository for Decomposed Prompting. ☆95 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning". ☆133 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following. ☆131 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024]. ☆148 · Updated 11 months ago
- ☆86 · Updated 2 years ago
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning". ☆36 · Updated 2 years ago
- A unified benchmark for math reasoning. ☆88 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning. ☆54 · Updated last year
- ☆28 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track). ☆94 · Updated 7 months ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023). ☆59 · Updated last year
- Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts. ☆40 · Updated last year
- ☆103 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 11 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models. ☆114 · Updated 3 months ago