TianduoWang / MsAT
[ACL 2023] Learning Multi-step Reasoning by Solving Arithmetic Tasks. https://arxiv.org/abs/2306.01707
☆24 · Updated 2 years ago
Alternatives and similar repositories for MsAT
Users interested in MsAT are comparing it to the repositories listed below.
- Official codebase for "In-Context Learning with Many Demonstration Examples" ☆16 · Updated 2 years ago
- ☆75 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code accompanying the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering" ☆16 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- ☆176 · Updated last year
- NAACL 2021: Are NLP Models Really Able to Solve Simple Math Word Problems? ☆134 · Updated 3 years ago
- ☆13 · Updated last year
- ☆28 · Updated last year
- ☆32 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- ☆75 · Updated last year
- ☆64 · Updated 2 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆63 · Updated last year
- ☆69 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆117 · Updated last year
- ☆38 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆131 · Updated 2 years ago
- [EMNLP 2022] TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning Questions over Tabular Data ☆17 · Updated 2 years ago
- Provides a minimal implementation to extract FLAN datasets for further processing ☆11 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] Data and code for the paper "CoNT: Contrastive Neural Text Generation" ☆153 · Updated 2 years ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆37 · Updated last year
- ☆82 · Updated 2 years ago
- ☆54 · Updated last year
- EMNLP 2023: Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆58 · Updated 3 years ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- ☆42 · Updated last year
- Question-Directed Graph Attention Network for Numerical Reasoning over Text ☆10 · Updated 5 years ago